I0102 16:33:05.522588       8 e2e.go:224] Starting e2e run "8cdeb96d-2d7d-11ea-b611-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577982784 - Will randomize all specs
Will run 201 of 2164 specs

Jan  2 16:33:06.510: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 16:33:06.521: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan  2 16:33:06.572: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan  2 16:33:06.636: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan  2 16:33:06.636: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan  2 16:33:06.636: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan  2 16:33:06.652: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan  2 16:33:06.652: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan  2 16:33:06.652: INFO: e2e test version: v1.13.12
Jan  2 16:33:06.653: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:33:06.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
Jan  2 16:33:06.806: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-42798 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-42798;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-42798 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-42798;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-42798.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-42798.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-42798.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-42798.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-42798.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-42798.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-42798.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-42798.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-42798.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.145.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.145.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.145.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.145.246_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-42798 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-42798;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-42798 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-42798;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-42798.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-42798.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-42798.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-42798.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-42798.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-42798.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-42798.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-42798.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-42798.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.145.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.145.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.145.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.145.246_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  2 16:33:25.080: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.095: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.103: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-42798 from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.111: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-42798 from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.153: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.181: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.194: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.199: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.204: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.209: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.213: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.216: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.218: INFO: Unable to read 10.96.145.246_udp@PTR from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.227: INFO: Unable to read 10.96.145.246_tcp@PTR from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.232: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.236: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.241: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-42798 from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.247: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-42798 from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.252: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.255: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.258: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.262: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.264: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.268: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.272: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.277: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.280: INFO: Unable to read 10.96.145.246_udp@PTR from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.285: INFO: Unable to read 10.96.145.246_tcp@PTR from pod e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005)
Jan  2 16:33:25.285: INFO: Lookups using e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-42798 wheezy_tcp@dns-test-service.e2e-tests-dns-42798 wheezy_udp@dns-test-service.e2e-tests-dns-42798.svc wheezy_tcp@dns-test-service.e2e-tests-dns-42798.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.96.145.246_udp@PTR 10.96.145.246_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-42798 jessie_tcp@dns-test-service.e2e-tests-dns-42798 jessie_udp@dns-test-service.e2e-tests-dns-42798.svc jessie_tcp@dns-test-service.e2e-tests-dns-42798.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-42798.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-42798.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.96.145.246_udp@PTR 10.96.145.246_tcp@PTR]
Jan  2 16:33:30.515: INFO: DNS probes using e2e-tests-dns-42798/dns-test-8e3a41f3-2d7d-11ea-b611-0242ac110005 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:33:31.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-42798" for this suite.
Jan  2 16:33:39.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:33:39.526: INFO: namespace: e2e-tests-dns-42798, resource: bindings, ignored listing per whitelist
Jan  2 16:33:39.552: INFO: namespace e2e-tests-dns-42798 deletion completed in 8.49991951s

• [SLOW TEST:32.898 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:33:39.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0102 16:33:42.986261       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 16:33:42.986: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:33:42.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-b7bxg" for this suite.
Jan  2 16:33:51.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:33:51.354: INFO: namespace: e2e-tests-gc-b7bxg, resource: bindings, ignored listing per whitelist
Jan  2 16:33:51.396: INFO: namespace e2e-tests-gc-b7bxg deletion completed in 8.395118856s

• [SLOW TEST:11.843 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:33:51.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-b99q4
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-b99q4
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-b99q4
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-b99q4
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-b99q4
Jan  2 16:34:03.956: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-b99q4, name: ss-0, uid: aff1cfd1-2d7d-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan  2 16:34:04.170: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-b99q4, name: ss-0, uid: aff1cfd1-2d7d-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  2 16:34:04.195: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-b99q4, name: ss-0, uid: aff1cfd1-2d7d-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  2 16:34:04.347: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-b99q4
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-b99q4
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-b99q4 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 16:34:22.911: INFO: Deleting all statefulset in ns e2e-tests-statefulset-b99q4
Jan  2 16:34:22.936: INFO: Scaling statefulset ss to 0
Jan  2 16:34:33.029: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 16:34:33.035: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:34:33.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-b99q4" for this suite.
Jan  2 16:34:41.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:34:41.369: INFO: namespace: e2e-tests-statefulset-b99q4, resource: bindings, ignored listing per whitelist
Jan  2 16:34:41.620: INFO: namespace e2e-tests-statefulset-b99q4 deletion completed in 8.538955858s

• [SLOW TEST:50.223 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:34:41.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-c6e32475-2d7d-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 16:34:42.048: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c6e5a995-2d7d-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-v6t62" to be "success or failure"
Jan  2 16:34:42.065: INFO: Pod "pod-projected-configmaps-c6e5a995-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.103155ms
Jan  2 16:34:44.081: INFO: Pod "pod-projected-configmaps-c6e5a995-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032149128s
Jan  2 16:34:46.098: INFO: Pod "pod-projected-configmaps-c6e5a995-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049735928s
Jan  2 16:34:48.145: INFO: Pod "pod-projected-configmaps-c6e5a995-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096454487s
Jan  2 16:34:50.182: INFO: Pod "pod-projected-configmaps-c6e5a995-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133269211s
Jan  2 16:34:52.225: INFO: Pod "pod-projected-configmaps-c6e5a995-2d7d-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.176308488s
STEP: Saw pod success
Jan  2 16:34:52.225: INFO: Pod "pod-projected-configmaps-c6e5a995-2d7d-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 16:34:52.232: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-c6e5a995-2d7d-11ea-b611-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Jan  2 16:34:52.396: INFO: Waiting for pod pod-projected-configmaps-c6e5a995-2d7d-11ea-b611-0242ac110005 to disappear
Jan  2 16:34:52.510: INFO: Pod pod-projected-configmaps-c6e5a995-2d7d-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:34:52.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v6t62" for this suite.
Jan  2 16:34:58.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:34:58.626: INFO: namespace: e2e-tests-projected-v6t62, resource: bindings, ignored listing per whitelist
Jan  2 16:34:58.700: INFO: namespace e2e-tests-projected-v6t62 deletion completed in 6.172498679s

• [SLOW TEST:17.080 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:34:58.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  2 16:34:58.935: INFO: Waiting up to 5m0s for pod "pod-d1042faa-2d7d-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-cxjbq" to be "success or failure"
Jan  2 16:34:58.944: INFO: Pod "pod-d1042faa-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.281117ms
Jan  2 16:35:01.234: INFO: Pod "pod-d1042faa-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298418587s
Jan  2 16:35:03.241: INFO: Pod "pod-d1042faa-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305484727s
Jan  2 16:35:05.714: INFO: Pod "pod-d1042faa-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.778791008s
Jan  2 16:35:07.728: INFO: Pod "pod-d1042faa-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.792278969s
Jan  2 16:35:09.746: INFO: Pod "pod-d1042faa-2d7d-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.810546241s
STEP: Saw pod success
Jan  2 16:35:09.746: INFO: Pod "pod-d1042faa-2d7d-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 16:35:09.767: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d1042faa-2d7d-11ea-b611-0242ac110005 container test-container:
STEP: delete the pod
Jan  2 16:35:10.487: INFO: Waiting for pod pod-d1042faa-2d7d-11ea-b611-0242ac110005 to disappear
Jan  2 16:35:10.517: INFO: Pod pod-d1042faa-2d7d-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:35:10.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cxjbq" for this suite.
Jan 2 16:35:16.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 2 16:35:16.786: INFO: namespace: e2e-tests-emptydir-cxjbq, resource: bindings, ignored listing per whitelist Jan 2 16:35:16.786: INFO: namespace e2e-tests-emptydir-cxjbq deletion completed in 6.249523079s • [SLOW TEST:18.086 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 2 16:35:16.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-dbc858c8-2d7d-11ea-b611-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 2 16:35:17.003: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbca0132-2d7d-11ea-b611-0242ac110005" in namespace "e2e-tests-configmap-z9w67" to be "success or failure" Jan 2 16:35:17.010: INFO: Pod "pod-configmaps-dbca0132-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.586246ms
Jan 2 16:35:19.021: INFO: Pod "pod-configmaps-dbca0132-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017336859s
Jan 2 16:35:21.034: INFO: Pod "pod-configmaps-dbca0132-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030022307s
Jan 2 16:35:23.307: INFO: Pod "pod-configmaps-dbca0132-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30335865s
Jan 2 16:35:25.322: INFO: Pod "pod-configmaps-dbca0132-2d7d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318584558s
Jan 2 16:35:27.343: INFO: Pod "pod-configmaps-dbca0132-2d7d-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.339215248s
STEP: Saw pod success
Jan 2 16:35:27.343: INFO: Pod "pod-configmaps-dbca0132-2d7d-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan 2 16:35:27.348: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-dbca0132-2d7d-11ea-b611-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 2 16:35:27.511: INFO: Waiting for pod pod-configmaps-dbca0132-2d7d-11ea-b611-0242ac110005 to disappear
Jan 2 16:35:27.549: INFO: Pod pod-configmaps-dbca0132-2d7d-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 16:35:27.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-z9w67" for this suite.
Jan 2 16:35:33.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 16:35:33.816: INFO: namespace: e2e-tests-configmap-z9w67, resource: bindings, ignored listing per whitelist
Jan 2 16:35:33.888: INFO: namespace e2e-tests-configmap-z9w67 deletion completed in 6.236217934s

• [SLOW TEST:17.101 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 16:35:33.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 2 16:35:34.186: INFO: PodSpec: initContainers in spec.initContainers
Jan 2 16:36:42.349: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"pod-init-e60b1059-2d7d-11ea-b611-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-tjlcq", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-tjlcq/pods/pod-init-e60b1059-2d7d-11ea-b611-0242ac110005", UID:"e60c6c1d-2d7d-11ea-a994-fa163e34d433", ResourceVersion:"16929258", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713579734, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"186818704"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-s44bg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001ca2e00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), 
ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s44bg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s44bg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-s44bg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001992d48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000878120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001992dc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001992de0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001992de8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001992dec)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713579734, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713579734, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713579734, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713579734, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000ba4080), 
InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00070bf80)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0002eef50)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://cdcb3315a5514c1718481f64ad9567e90e79d6de4c556929e3cf2e74f4d70a6d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ba4100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ba40c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 16:36:42.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-tjlcq" for this suite.
Jan 2 16:37:06.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 16:37:06.629: INFO: namespace: e2e-tests-init-container-tjlcq, resource: bindings, ignored listing per whitelist
Jan 2 16:37:06.647: INFO: namespace e2e-tests-init-container-tjlcq deletion completed in 24.27528202s

• [SLOW TEST:92.758 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 16:37:06.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-1d4f0b55-2d7e-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 2 16:37:06.958: INFO: Waiting up to 5m0s for pod "pod-secrets-1d51a0a3-2d7e-11ea-b611-0242ac110005" in namespace "e2e-tests-secrets-dp5dq" to be "success or failure"
Jan 2 16:37:07.107: INFO: Pod "pod-secrets-1d51a0a3-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false.
Elapsed: 149.429455ms
Jan 2 16:37:09.520: INFO: Pod "pod-secrets-1d51a0a3-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.561901362s
Jan 2 16:37:11.565: INFO: Pod "pod-secrets-1d51a0a3-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.60736703s
Jan 2 16:37:13.600: INFO: Pod "pod-secrets-1d51a0a3-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.642196453s
Jan 2 16:37:15.622: INFO: Pod "pod-secrets-1d51a0a3-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.664506728s
Jan 2 16:37:17.662: INFO: Pod "pod-secrets-1d51a0a3-2d7e-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.704605854s
STEP: Saw pod success
Jan 2 16:37:17.663: INFO: Pod "pod-secrets-1d51a0a3-2d7e-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan 2 16:37:17.674: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1d51a0a3-2d7e-11ea-b611-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan 2 16:37:18.066: INFO: Waiting for pod pod-secrets-1d51a0a3-2d7e-11ea-b611-0242ac110005 to disappear
Jan 2 16:37:18.084: INFO: Pod pod-secrets-1d51a0a3-2d7e-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 16:37:18.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dp5dq" for this suite.
Jan 2 16:37:26.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 16:37:26.376: INFO: namespace: e2e-tests-secrets-dp5dq, resource: bindings, ignored listing per whitelist
Jan 2 16:37:26.420: INFO: namespace e2e-tests-secrets-dp5dq deletion completed in 8.320197815s

• [SLOW TEST:19.773 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 16:37:26.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 2 16:37:27.071: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2939f406-2d7e-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00166a57a), BlockOwnerDeletion:(*bool)(0xc00166a57b)}}
Jan 2 16:37:27.307: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"29376e58-2d7e-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001c448aa), BlockOwnerDeletion:(*bool)(0xc001c448ab)}}
Jan 2 16:37:27.345: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"29387330-2d7e-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0020252c2), BlockOwnerDeletion:(*bool)(0xc0020252c3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 16:37:32.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-8sxm2" for this suite.
Jan 2 16:37:38.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 16:37:38.730: INFO: namespace: e2e-tests-gc-8sxm2, resource: bindings, ignored listing per whitelist
Jan 2 16:37:38.845: INFO: namespace e2e-tests-gc-8sxm2 deletion completed in 6.290285448s

• [SLOW TEST:12.424 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 16:37:38.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 2 16:37:39.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-dkv8f" to be "success or failure"
Jan 2 16:37:39.082: INFO: Pod "downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.577963ms
Jan 2 16:37:41.100: INFO: Pod "downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023439376s
Jan 2 16:37:43.129: INFO: Pod "downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052061702s
Jan 2 16:37:45.917: INFO: Pod "downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.840633494s
Jan 2 16:37:47.939: INFO: Pod "downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.862186669s
Jan 2 16:37:49.950: INFO: Pod "downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.872815368s
Jan 2 16:37:51.964: INFO: Pod "downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.887597449s
STEP: Saw pod success
Jan 2 16:37:51.965: INFO: Pod "downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan 2 16:37:51.968: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan 2 16:37:52.739: INFO: Waiting for pod downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005 to disappear
Jan 2 16:37:52.758: INFO: Pod downwardapi-volume-30788f88-2d7e-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 2 16:37:52.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dkv8f" for this suite.
Jan 2 16:37:58.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 2 16:37:58.969: INFO: namespace: e2e-tests-projected-dkv8f, resource: bindings, ignored listing per whitelist
Jan 2 16:37:59.052: INFO: namespace e2e-tests-projected-dkv8f deletion completed in 6.279783687s

• [SLOW TEST:20.207 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 2 16:37:59.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 2 16:37:59.359: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 29.333306ms)
Jan  2 16:37:59.453: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 93.866772ms)
Jan  2 16:37:59.466: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.329725ms)
Jan  2 16:37:59.475: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.634878ms)
Jan  2 16:37:59.483: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.287925ms)
Jan  2 16:37:59.488: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.149953ms)
Jan  2 16:37:59.495: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.039387ms)
Jan  2 16:37:59.500: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.174826ms)
Jan  2 16:37:59.505: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.744197ms)
Jan  2 16:37:59.511: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.978128ms)
Jan  2 16:37:59.515: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.463177ms)
Jan  2 16:37:59.520: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.931559ms)
Jan  2 16:37:59.525: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.70924ms)
Jan  2 16:37:59.529: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.331052ms)
Jan  2 16:37:59.535: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.682757ms)
Jan  2 16:37:59.540: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.523804ms)
Jan  2 16:37:59.545: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.417607ms)
Jan  2 16:37:59.549: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.133512ms)
Jan  2 16:37:59.553: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.58024ms)
Jan  2 16:37:59.557: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.791602ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:37:59.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-ld2z6" for this suite.
Jan  2 16:38:05.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:38:05.721: INFO: namespace: e2e-tests-proxy-ld2z6, resource: bindings, ignored listing per whitelist
Jan  2 16:38:05.750: INFO: namespace e2e-tests-proxy-ld2z6 deletion completed in 6.187075702s

• [SLOW TEST:6.697 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:38:05.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7bfxp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 16:38:05.941: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 16:38:42.141: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-7bfxp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 16:38:42.141: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 16:38:43.089: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:38:43.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-7bfxp" for this suite.
Jan  2 16:39:07.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:39:07.309: INFO: namespace: e2e-tests-pod-network-test-7bfxp, resource: bindings, ignored listing per whitelist
Jan  2 16:39:07.376: INFO: namespace e2e-tests-pod-network-test-7bfxp deletion completed in 24.275953477s

• [SLOW TEST:61.625 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:39:07.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  2 16:39:18.396: INFO: Successfully updated pod "annotationupdate653f4464-2d7e-11ea-b611-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:39:20.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lr9cj" for this suite.
Jan  2 16:39:36.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:39:36.955: INFO: namespace: e2e-tests-projected-lr9cj, resource: bindings, ignored listing per whitelist
Jan  2 16:39:36.998: INFO: namespace e2e-tests-projected-lr9cj deletion completed in 16.345169905s

• [SLOW TEST:29.621 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:39:36.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:39:49.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-27zws" for this suite.
Jan  2 16:40:35.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:40:35.751: INFO: namespace: e2e-tests-kubelet-test-27zws, resource: bindings, ignored listing per whitelist
Jan  2 16:40:35.790: INFO: namespace e2e-tests-kubelet-test-27zws deletion completed in 46.335924584s

• [SLOW TEST:58.792 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:40:35.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-99f0b59d-2d7e-11ea-b611-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:40:50.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-zrddl" for this suite.
Jan  2 16:41:14.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:41:14.476: INFO: namespace: e2e-tests-configmap-zrddl, resource: bindings, ignored listing per whitelist
Jan  2 16:41:14.632: INFO: namespace e2e-tests-configmap-zrddl deletion completed in 24.337155376s

• [SLOW TEST:38.841 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:41:14.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  2 16:41:15.107: INFO: Waiting up to 5m0s for pod "pod-b13a0599-2d7e-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-s8m74" to be "success or failure"
Jan  2 16:41:15.138: INFO: Pod "pod-b13a0599-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.146122ms
Jan  2 16:41:17.165: INFO: Pod "pod-b13a0599-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057990223s
Jan  2 16:41:19.188: INFO: Pod "pod-b13a0599-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081346573s
Jan  2 16:41:21.280: INFO: Pod "pod-b13a0599-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172800346s
Jan  2 16:41:23.292: INFO: Pod "pod-b13a0599-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185177306s
Jan  2 16:41:25.395: INFO: Pod "pod-b13a0599-2d7e-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.288278735s
STEP: Saw pod success
Jan  2 16:41:25.395: INFO: Pod "pod-b13a0599-2d7e-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 16:41:25.417: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b13a0599-2d7e-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 16:41:25.567: INFO: Waiting for pod pod-b13a0599-2d7e-11ea-b611-0242ac110005 to disappear
Jan  2 16:41:25.582: INFO: Pod pod-b13a0599-2d7e-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:41:25.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-s8m74" for this suite.
Jan  2 16:41:31.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:41:31.923: INFO: namespace: e2e-tests-emptydir-s8m74, resource: bindings, ignored listing per whitelist
Jan  2 16:41:31.948: INFO: namespace e2e-tests-emptydir-s8m74 deletion completed in 6.351041863s

• [SLOW TEST:17.315 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:41:31.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:42:33.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-j9t7g" for this suite.
Jan  2 16:42:39.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:42:39.120: INFO: namespace: e2e-tests-container-runtime-j9t7g, resource: bindings, ignored listing per whitelist
Jan  2 16:42:39.203: INFO: namespace e2e-tests-container-runtime-j9t7g deletion completed in 6.145420873s

• [SLOW TEST:67.254 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:42:39.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan  2 16:42:39.532: INFO: Waiting up to 5m0s for pod "client-containers-e3819764-2d7e-11ea-b611-0242ac110005" in namespace "e2e-tests-containers-9m6dn" to be "success or failure"
Jan  2 16:42:39.540: INFO: Pod "client-containers-e3819764-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.790767ms
Jan  2 16:42:41.558: INFO: Pod "client-containers-e3819764-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025039256s
Jan  2 16:42:43.600: INFO: Pod "client-containers-e3819764-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067698625s
Jan  2 16:42:46.353: INFO: Pod "client-containers-e3819764-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.82066017s
Jan  2 16:42:48.415: INFO: Pod "client-containers-e3819764-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.882416782s
Jan  2 16:42:50.433: INFO: Pod "client-containers-e3819764-2d7e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.900346993s
Jan  2 16:42:52.461: INFO: Pod "client-containers-e3819764-2d7e-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.928501435s
STEP: Saw pod success
Jan  2 16:42:52.461: INFO: Pod "client-containers-e3819764-2d7e-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 16:42:52.487: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-e3819764-2d7e-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 16:42:52.801: INFO: Waiting for pod client-containers-e3819764-2d7e-11ea-b611-0242ac110005 to disappear
Jan  2 16:42:52.817: INFO: Pod client-containers-e3819764-2d7e-11ea-b611-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:42:52.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-9m6dn" for this suite.
Jan  2 16:42:58.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:42:59.041: INFO: namespace: e2e-tests-containers-9m6dn, resource: bindings, ignored listing per whitelist
Jan  2 16:42:59.049: INFO: namespace e2e-tests-containers-9m6dn deletion completed in 6.221755414s

• [SLOW TEST:19.846 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:42:59.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  2 16:43:19.523: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 16:43:19.578: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 16:43:21.579: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 16:43:21.764: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 16:43:23.579: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 16:43:23.768: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 16:43:25.579: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 16:43:25.592: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 16:43:27.579: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 16:43:27.597: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 16:43:29.579: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 16:43:29.597: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 16:43:31.579: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 16:43:31.595: INFO: Pod pod-with-poststart-http-hook still exists
Jan  2 16:43:33.579: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  2 16:43:33.616: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:43:33.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ts8dj" for this suite.
Jan  2 16:43:57.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:43:57.746: INFO: namespace: e2e-tests-container-lifecycle-hook-ts8dj, resource: bindings, ignored listing per whitelist
Jan  2 16:43:57.845: INFO: namespace e2e-tests-container-lifecycle-hook-ts8dj deletion completed in 24.212985395s

• [SLOW TEST:58.795 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:43:57.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  2 16:43:59.792: INFO: Pod name wrapped-volume-race-1355cb0d-2d7f-11ea-b611-0242ac110005: Found 0 pods out of 5
Jan  2 16:44:04.855: INFO: Pod name wrapped-volume-race-1355cb0d-2d7f-11ea-b611-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1355cb0d-2d7f-11ea-b611-0242ac110005 in namespace e2e-tests-emptydir-wrapper-nplt4, will wait for the garbage collector to delete the pods
Jan  2 16:46:29.103: INFO: Deleting ReplicationController wrapped-volume-race-1355cb0d-2d7f-11ea-b611-0242ac110005 took: 37.929391ms
Jan  2 16:46:29.604: INFO: Terminating ReplicationController wrapped-volume-race-1355cb0d-2d7f-11ea-b611-0242ac110005 pods took: 501.088475ms
STEP: Creating RC which spawns configmap-volume pods
Jan  2 16:47:23.427: INFO: Pod name wrapped-volume-race-8cb5feb6-2d7f-11ea-b611-0242ac110005: Found 0 pods out of 5
Jan  2 16:47:28.470: INFO: Pod name wrapped-volume-race-8cb5feb6-2d7f-11ea-b611-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8cb5feb6-2d7f-11ea-b611-0242ac110005 in namespace e2e-tests-emptydir-wrapper-nplt4, will wait for the garbage collector to delete the pods
Jan  2 16:49:32.911: INFO: Deleting ReplicationController wrapped-volume-race-8cb5feb6-2d7f-11ea-b611-0242ac110005 took: 54.362245ms
Jan  2 16:49:33.313: INFO: Terminating ReplicationController wrapped-volume-race-8cb5feb6-2d7f-11ea-b611-0242ac110005 pods took: 401.7413ms
STEP: Creating RC which spawns configmap-volume pods
Jan  2 16:50:27.048: INFO: Pod name wrapped-volume-race-fa26970d-2d7f-11ea-b611-0242ac110005: Found 0 pods out of 5
Jan  2 16:50:32.073: INFO: Pod name wrapped-volume-race-fa26970d-2d7f-11ea-b611-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fa26970d-2d7f-11ea-b611-0242ac110005 in namespace e2e-tests-emptydir-wrapper-nplt4, will wait for the garbage collector to delete the pods
Jan  2 16:52:56.231: INFO: Deleting ReplicationController wrapped-volume-race-fa26970d-2d7f-11ea-b611-0242ac110005 took: 18.300584ms
Jan  2 16:52:56.632: INFO: Terminating ReplicationController wrapped-volume-race-fa26970d-2d7f-11ea-b611-0242ac110005 pods took: 401.312708ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:53:45.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-nplt4" for this suite.
Jan  2 16:53:53.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:53:54.104: INFO: namespace: e2e-tests-emptydir-wrapper-nplt4, resource: bindings, ignored listing per whitelist
Jan  2 16:53:54.108: INFO: namespace e2e-tests-emptydir-wrapper-nplt4 deletion completed in 8.344824004s

• [SLOW TEST:596.262 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:53:54.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-75ba7160-2d80-11ea-b611-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-75ba71ba-2d80-11ea-b611-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-75ba7160-2d80-11ea-b611-0242ac110005
STEP: Updating configmap cm-test-opt-upd-75ba71ba-2d80-11ea-b611-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-75ba71d6-2d80-11ea-b611-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:55:23.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h4csd" for this suite.
Jan  2 16:55:47.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:55:47.447: INFO: namespace: e2e-tests-configmap-h4csd, resource: bindings, ignored listing per whitelist
Jan  2 16:55:47.574: INFO: namespace e2e-tests-configmap-h4csd deletion completed in 24.291353652s

• [SLOW TEST:113.465 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:55:47.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  2 16:55:47.868: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  2 16:55:48.035: INFO: Waiting for terminating namespaces to be deleted...
Jan  2 16:55:48.061: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  2 16:55:48.120: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 16:55:48.121: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  2 16:55:48.121: INFO: 	Container weave ready: true, restart count 0
Jan  2 16:55:48.121: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 16:55:48.121: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 16:55:48.121: INFO: 	Container coredns ready: true, restart count 0
Jan  2 16:55:48.121: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 16:55:48.121: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 16:55:48.121: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 16:55:48.121: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 16:55:48.121: INFO: 	Container coredns ready: true, restart count 0
Jan  2 16:55:48.121: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  2 16:55:48.121: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-bfb958ee-2d80-11ea-b611-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-bfb958ee-2d80-11ea-b611-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-bfb958ee-2d80-11ea-b611-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:56:10.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-8nvf6" for this suite.
Jan  2 16:56:35.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:56:35.146: INFO: namespace: e2e-tests-sched-pred-8nvf6, resource: bindings, ignored listing per whitelist
Jan  2 16:56:35.159: INFO: namespace e2e-tests-sched-pred-8nvf6 deletion completed in 24.163616928s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:47.585 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:56:35.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-4rm8v/configmap-test-d5b59497-2d80-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 16:56:35.316: INFO: Waiting up to 5m0s for pod "pod-configmaps-d5b906f5-2d80-11ea-b611-0242ac110005" in namespace "e2e-tests-configmap-4rm8v" to be "success or failure"
Jan  2 16:56:35.490: INFO: Pod "pod-configmaps-d5b906f5-2d80-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 173.405864ms
Jan  2 16:56:37.504: INFO: Pod "pod-configmaps-d5b906f5-2d80-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18823359s
Jan  2 16:56:39.530: INFO: Pod "pod-configmaps-d5b906f5-2d80-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213569679s
Jan  2 16:56:41.710: INFO: Pod "pod-configmaps-d5b906f5-2d80-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393322741s
Jan  2 16:56:43.731: INFO: Pod "pod-configmaps-d5b906f5-2d80-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.415102856s
Jan  2 16:56:45.747: INFO: Pod "pod-configmaps-d5b906f5-2d80-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.430985393s
STEP: Saw pod success
Jan  2 16:56:45.747: INFO: Pod "pod-configmaps-d5b906f5-2d80-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 16:56:45.754: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d5b906f5-2d80-11ea-b611-0242ac110005 container env-test: 
STEP: delete the pod
Jan  2 16:56:45.894: INFO: Waiting for pod pod-configmaps-d5b906f5-2d80-11ea-b611-0242ac110005 to disappear
Jan  2 16:56:45.917: INFO: Pod pod-configmaps-d5b906f5-2d80-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:56:45.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4rm8v" for this suite.
Jan  2 16:56:51.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:56:52.095: INFO: namespace: e2e-tests-configmap-4rm8v, resource: bindings, ignored listing per whitelist
Jan  2 16:56:52.234: INFO: namespace e2e-tests-configmap-4rm8v deletion completed in 6.310328954s

• [SLOW TEST:17.075 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:56:52.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-e0094d3d-2d80-11ea-b611-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-e0094d99-2d80-11ea-b611-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e0094d3d-2d80-11ea-b611-0242ac110005
STEP: Updating configmap cm-test-opt-upd-e0094d99-2d80-11ea-b611-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-e0094dbe-2d80-11ea-b611-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:57:13.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f9m5g" for this suite.
Jan  2 16:57:39.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:57:39.349: INFO: namespace: e2e-tests-projected-f9m5g, resource: bindings, ignored listing per whitelist
Jan  2 16:57:39.434: INFO: namespace e2e-tests-projected-f9m5g deletion completed in 26.220874306s

• [SLOW TEST:47.201 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:57:39.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan  2 16:57:39.624: INFO: Waiting up to 5m0s for pod "client-containers-fc0e47bc-2d80-11ea-b611-0242ac110005" in namespace "e2e-tests-containers-9rgkn" to be "success or failure"
Jan  2 16:57:39.709: INFO: Pod "client-containers-fc0e47bc-2d80-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.117011ms
Jan  2 16:57:41.840: INFO: Pod "client-containers-fc0e47bc-2d80-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216053022s
Jan  2 16:57:43.863: INFO: Pod "client-containers-fc0e47bc-2d80-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239685251s
Jan  2 16:57:46.705: INFO: Pod "client-containers-fc0e47bc-2d80-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.081340694s
Jan  2 16:57:48.719: INFO: Pod "client-containers-fc0e47bc-2d80-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.095222899s
Jan  2 16:57:50.852: INFO: Pod "client-containers-fc0e47bc-2d80-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.228472704s
STEP: Saw pod success
Jan  2 16:57:50.853: INFO: Pod "client-containers-fc0e47bc-2d80-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 16:57:50.867: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-fc0e47bc-2d80-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 16:57:51.031: INFO: Waiting for pod client-containers-fc0e47bc-2d80-11ea-b611-0242ac110005 to disappear
Jan  2 16:57:51.044: INFO: Pod client-containers-fc0e47bc-2d80-11ea-b611-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:57:51.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-9rgkn" for this suite.
Jan  2 16:57:57.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:57:57.240: INFO: namespace: e2e-tests-containers-9rgkn, resource: bindings, ignored listing per whitelist
Jan  2 16:57:57.365: INFO: namespace e2e-tests-containers-9rgkn deletion completed in 6.310013802s

• [SLOW TEST:17.929 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:57:57.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan  2 16:57:57.606: INFO: Waiting up to 5m0s for pod "client-containers-06c57c01-2d81-11ea-b611-0242ac110005" in namespace "e2e-tests-containers-zcmhb" to be "success or failure"
Jan  2 16:57:57.755: INFO: Pod "client-containers-06c57c01-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 147.974632ms
Jan  2 16:57:59.775: INFO: Pod "client-containers-06c57c01-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168226179s
Jan  2 16:58:01.783: INFO: Pod "client-containers-06c57c01-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176459143s
Jan  2 16:58:04.128: INFO: Pod "client-containers-06c57c01-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.521526194s
Jan  2 16:58:06.159: INFO: Pod "client-containers-06c57c01-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552122406s
Jan  2 16:58:08.170: INFO: Pod "client-containers-06c57c01-2d81-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.563636203s
STEP: Saw pod success
Jan  2 16:58:08.170: INFO: Pod "client-containers-06c57c01-2d81-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 16:58:08.174: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-06c57c01-2d81-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 16:58:09.110: INFO: Waiting for pod client-containers-06c57c01-2d81-11ea-b611-0242ac110005 to disappear
Jan  2 16:58:09.147: INFO: Pod client-containers-06c57c01-2d81-11ea-b611-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:58:09.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zcmhb" for this suite.
Jan  2 16:58:17.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:58:17.468: INFO: namespace: e2e-tests-containers-zcmhb, resource: bindings, ignored listing per whitelist
Jan  2 16:58:17.537: INFO: namespace e2e-tests-containers-zcmhb deletion completed in 8.381016973s

• [SLOW TEST:20.172 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:58:17.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 16:58:17.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-56l8w'
Jan  2 16:58:20.012: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 16:58:20.013: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  2 16:58:22.083: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-fpw2l]
Jan  2 16:58:22.083: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-fpw2l" in namespace "e2e-tests-kubectl-56l8w" to be "running and ready"
Jan  2 16:58:22.088: INFO: Pod "e2e-test-nginx-rc-fpw2l": Phase="Pending", Reason="", readiness=false. Elapsed: 5.287047ms
Jan  2 16:58:24.102: INFO: Pod "e2e-test-nginx-rc-fpw2l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019323906s
Jan  2 16:58:26.165: INFO: Pod "e2e-test-nginx-rc-fpw2l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081919951s
Jan  2 16:58:28.186: INFO: Pod "e2e-test-nginx-rc-fpw2l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102827589s
Jan  2 16:58:30.212: INFO: Pod "e2e-test-nginx-rc-fpw2l": Phase="Running", Reason="", readiness=true. Elapsed: 8.128862201s
Jan  2 16:58:30.212: INFO: Pod "e2e-test-nginx-rc-fpw2l" satisfied condition "running and ready"
Jan  2 16:58:30.212: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-fpw2l]
Jan  2 16:58:30.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-56l8w'
Jan  2 16:58:30.495: INFO: stderr: ""
Jan  2 16:58:30.496: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan  2 16:58:30.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-56l8w'
Jan  2 16:58:30.725: INFO: stderr: ""
Jan  2 16:58:30.725: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:58:30.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-56l8w" for this suite.
Jan  2 16:58:54.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:58:54.947: INFO: namespace: e2e-tests-kubectl-56l8w, resource: bindings, ignored listing per whitelist
Jan  2 16:58:55.000: INFO: namespace e2e-tests-kubectl-56l8w deletion completed in 24.262962865s

• [SLOW TEST:37.463 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:58:55.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-291c91fe-2d81-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 16:58:55.235: INFO: Waiting up to 5m0s for pod "pod-configmaps-291d8665-2d81-11ea-b611-0242ac110005" in namespace "e2e-tests-configmap-jqh4g" to be "success or failure"
Jan  2 16:58:55.246: INFO: Pod "pod-configmaps-291d8665-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.440609ms
Jan  2 16:58:57.266: INFO: Pod "pod-configmaps-291d8665-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031571936s
Jan  2 16:58:59.344: INFO: Pod "pod-configmaps-291d8665-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108714797s
Jan  2 16:59:01.523: INFO: Pod "pod-configmaps-291d8665-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.287705197s
Jan  2 16:59:03.775: INFO: Pod "pod-configmaps-291d8665-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.54060318s
Jan  2 16:59:05.790: INFO: Pod "pod-configmaps-291d8665-2d81-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.555405761s
STEP: Saw pod success
Jan  2 16:59:05.791: INFO: Pod "pod-configmaps-291d8665-2d81-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 16:59:05.809: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-291d8665-2d81-11ea-b611-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 16:59:06.562: INFO: Waiting for pod pod-configmaps-291d8665-2d81-11ea-b611-0242ac110005 to disappear
Jan  2 16:59:06.627: INFO: Pod pod-configmaps-291d8665-2d81-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:59:06.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jqh4g" for this suite.
Jan  2 16:59:12.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:59:13.121: INFO: namespace: e2e-tests-configmap-jqh4g, resource: bindings, ignored listing per whitelist
Jan  2 16:59:13.152: INFO: namespace e2e-tests-configmap-jqh4g deletion completed in 6.298576705s

• [SLOW TEST:18.152 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:59:13.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-33f40e24-2d81-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 16:59:13.415: INFO: Waiting up to 5m0s for pod "pod-secrets-33f557b1-2d81-11ea-b611-0242ac110005" in namespace "e2e-tests-secrets-8shfs" to be "success or failure"
Jan  2 16:59:13.430: INFO: Pod "pod-secrets-33f557b1-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.216462ms
Jan  2 16:59:15.891: INFO: Pod "pod-secrets-33f557b1-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475353095s
Jan  2 16:59:17.913: INFO: Pod "pod-secrets-33f557b1-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.496922019s
Jan  2 16:59:20.158: INFO: Pod "pod-secrets-33f557b1-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.742459037s
Jan  2 16:59:22.200: INFO: Pod "pod-secrets-33f557b1-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.784432281s
Jan  2 16:59:24.223: INFO: Pod "pod-secrets-33f557b1-2d81-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.80759772s
STEP: Saw pod success
Jan  2 16:59:24.223: INFO: Pod "pod-secrets-33f557b1-2d81-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 16:59:24.229: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-33f557b1-2d81-11ea-b611-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 16:59:24.337: INFO: Waiting for pod pod-secrets-33f557b1-2d81-11ea-b611-0242ac110005 to disappear
Jan  2 16:59:24.348: INFO: Pod pod-secrets-33f557b1-2d81-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 16:59:24.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8shfs" for this suite.
Jan  2 16:59:30.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 16:59:30.616: INFO: namespace: e2e-tests-secrets-8shfs, resource: bindings, ignored listing per whitelist
Jan  2 16:59:30.712: INFO: namespace e2e-tests-secrets-8shfs deletion completed in 6.216528459s

• [SLOW TEST:17.559 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 16:59:30.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-gvstl
I0102 16:59:30.920041       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-gvstl, replica count: 1
I0102 16:59:31.971244       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 16:59:32.971923       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 16:59:33.972707       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 16:59:34.973288       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 16:59:35.973765       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 16:59:36.974522       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 16:59:37.975108       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 16:59:38.975698       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 16:59:39.976244       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  2 16:59:40.223: INFO: Created: latency-svc-j5h9v
Jan  2 16:59:40.293: INFO: Got endpoints: latency-svc-j5h9v [216.565799ms]
Jan  2 16:59:40.475: INFO: Created: latency-svc-55ttk
Jan  2 16:59:40.595: INFO: Got endpoints: latency-svc-55ttk [301.250671ms]
Jan  2 16:59:40.614: INFO: Created: latency-svc-c5h6z
Jan  2 16:59:40.643: INFO: Got endpoints: latency-svc-c5h6z [347.341538ms]
Jan  2 16:59:40.777: INFO: Created: latency-svc-27x56
Jan  2 16:59:40.796: INFO: Got endpoints: latency-svc-27x56 [501.700315ms]
Jan  2 16:59:40.866: INFO: Created: latency-svc-6kbpn
Jan  2 16:59:40.986: INFO: Got endpoints: latency-svc-6kbpn [691.775141ms]
Jan  2 16:59:41.008: INFO: Created: latency-svc-5b2dl
Jan  2 16:59:41.025: INFO: Got endpoints: latency-svc-5b2dl [730.155239ms]
Jan  2 16:59:41.076: INFO: Created: latency-svc-tvp9m
Jan  2 16:59:41.219: INFO: Got endpoints: latency-svc-tvp9m [923.279289ms]
Jan  2 16:59:41.244: INFO: Created: latency-svc-xttq4
Jan  2 16:59:41.262: INFO: Got endpoints: latency-svc-xttq4 [968.088234ms]
Jan  2 16:59:41.496: INFO: Created: latency-svc-qtk5j
Jan  2 16:59:41.514: INFO: Got endpoints: latency-svc-qtk5j [1.218985671s]
Jan  2 16:59:41.840: INFO: Created: latency-svc-vx2jh
Jan  2 16:59:42.204: INFO: Got endpoints: latency-svc-vx2jh [1.908524424s]
Jan  2 16:59:42.285: INFO: Created: latency-svc-v6pfz
Jan  2 16:59:42.470: INFO: Got endpoints: latency-svc-v6pfz [2.174104606s]
Jan  2 16:59:42.502: INFO: Created: latency-svc-wfpnr
Jan  2 16:59:42.578: INFO: Got endpoints: latency-svc-wfpnr [2.28201191s]
Jan  2 16:59:42.815: INFO: Created: latency-svc-c22qt
Jan  2 16:59:43.018: INFO: Got endpoints: latency-svc-c22qt [2.721576901s]
Jan  2 16:59:43.032: INFO: Created: latency-svc-7djkc
Jan  2 16:59:43.055: INFO: Got endpoints: latency-svc-7djkc [2.759053907s]
Jan  2 16:59:43.215: INFO: Created: latency-svc-dqvs8
Jan  2 16:59:43.230: INFO: Got endpoints: latency-svc-dqvs8 [2.934253126s]
Jan  2 16:59:43.308: INFO: Created: latency-svc-rhg7m
Jan  2 16:59:43.477: INFO: Got endpoints: latency-svc-rhg7m [3.18275626s]
Jan  2 16:59:43.708: INFO: Created: latency-svc-kkgkl
Jan  2 16:59:43.731: INFO: Got endpoints: latency-svc-kkgkl [3.135161795s]
Jan  2 16:59:43.884: INFO: Created: latency-svc-zkb2g
Jan  2 16:59:43.910: INFO: Got endpoints: latency-svc-zkb2g [3.266342982s]
Jan  2 16:59:43.960: INFO: Created: latency-svc-rggwq
Jan  2 16:59:43.977: INFO: Got endpoints: latency-svc-rggwq [3.180799646s]
Jan  2 16:59:44.217: INFO: Created: latency-svc-zrbrx
Jan  2 16:59:44.228: INFO: Got endpoints: latency-svc-zrbrx [3.24151049s]
Jan  2 16:59:44.362: INFO: Created: latency-svc-qzctb
Jan  2 16:59:44.588: INFO: Got endpoints: latency-svc-qzctb [3.562565158s]
Jan  2 16:59:44.613: INFO: Created: latency-svc-hx2sc
Jan  2 16:59:44.627: INFO: Got endpoints: latency-svc-hx2sc [3.40773691s]
Jan  2 16:59:44.778: INFO: Created: latency-svc-d8l2q
Jan  2 16:59:44.795: INFO: Got endpoints: latency-svc-d8l2q [3.532657051s]
Jan  2 16:59:44.968: INFO: Created: latency-svc-v2fb6
Jan  2 16:59:44.997: INFO: Got endpoints: latency-svc-v2fb6 [3.482399701s]
Jan  2 16:59:45.057: INFO: Created: latency-svc-vl5jr
Jan  2 16:59:45.201: INFO: Got endpoints: latency-svc-vl5jr [2.996311846s]
Jan  2 16:59:45.220: INFO: Created: latency-svc-mtmkw
Jan  2 16:59:45.476: INFO: Got endpoints: latency-svc-mtmkw [3.005709129s]
Jan  2 16:59:45.517: INFO: Created: latency-svc-tgv9n
Jan  2 16:59:45.548: INFO: Got endpoints: latency-svc-tgv9n [2.969921284s]
Jan  2 16:59:45.707: INFO: Created: latency-svc-9ktqm
Jan  2 16:59:45.732: INFO: Got endpoints: latency-svc-9ktqm [2.713877298s]
Jan  2 16:59:45.770: INFO: Created: latency-svc-nx5vp
Jan  2 16:59:45.893: INFO: Got endpoints: latency-svc-nx5vp [2.837887222s]
Jan  2 16:59:45.914: INFO: Created: latency-svc-v86jt
Jan  2 16:59:45.964: INFO: Got endpoints: latency-svc-v86jt [2.733457383s]
Jan  2 16:59:46.125: INFO: Created: latency-svc-ppbn5
Jan  2 16:59:46.144: INFO: Got endpoints: latency-svc-ppbn5 [2.665487268s]
Jan  2 16:59:46.330: INFO: Created: latency-svc-nbjxv
Jan  2 16:59:46.360: INFO: Got endpoints: latency-svc-nbjxv [2.628449136s]
Jan  2 16:59:46.523: INFO: Created: latency-svc-mmkgw
Jan  2 16:59:46.630: INFO: Created: latency-svc-cjdxn
Jan  2 16:59:46.757: INFO: Got endpoints: latency-svc-mmkgw [2.846795129s]
Jan  2 16:59:46.782: INFO: Got endpoints: latency-svc-cjdxn [2.804375105s]
Jan  2 16:59:46.813: INFO: Created: latency-svc-r6g58
Jan  2 16:59:46.826: INFO: Got endpoints: latency-svc-r6g58 [2.597157852s]
Jan  2 16:59:46.997: INFO: Created: latency-svc-xcbxj
Jan  2 16:59:47.058: INFO: Got endpoints: latency-svc-xcbxj [2.470226459s]
Jan  2 16:59:47.218: INFO: Created: latency-svc-htkq7
Jan  2 16:59:47.233: INFO: Got endpoints: latency-svc-htkq7 [2.605182791s]
Jan  2 16:59:47.502: INFO: Created: latency-svc-4vtsk
Jan  2 16:59:47.507: INFO: Got endpoints: latency-svc-4vtsk [2.711548649s]
Jan  2 16:59:47.707: INFO: Created: latency-svc-4n5rd
Jan  2 16:59:47.732: INFO: Got endpoints: latency-svc-4n5rd [2.734786987s]
Jan  2 16:59:47.870: INFO: Created: latency-svc-csxrq
Jan  2 16:59:47.891: INFO: Got endpoints: latency-svc-csxrq [2.689617073s]
Jan  2 16:59:47.942: INFO: Created: latency-svc-lswd7
Jan  2 16:59:48.195: INFO: Got endpoints: latency-svc-lswd7 [2.718742396s]
Jan  2 16:59:48.422: INFO: Created: latency-svc-gmxb5
Jan  2 16:59:48.453: INFO: Got endpoints: latency-svc-gmxb5 [2.904446903s]
Jan  2 16:59:48.490: INFO: Created: latency-svc-5rqdt
Jan  2 16:59:48.653: INFO: Created: latency-svc-7v57p
Jan  2 16:59:48.681: INFO: Got endpoints: latency-svc-5rqdt [2.948353946s]
Jan  2 16:59:48.700: INFO: Got endpoints: latency-svc-7v57p [2.806283553s]
Jan  2 16:59:48.856: INFO: Created: latency-svc-m96b7
Jan  2 16:59:48.863: INFO: Got endpoints: latency-svc-m96b7 [2.899277697s]
Jan  2 16:59:48.917: INFO: Created: latency-svc-gwzhb
Jan  2 16:59:48.995: INFO: Got endpoints: latency-svc-gwzhb [2.85087286s]
Jan  2 16:59:49.059: INFO: Created: latency-svc-jnc56
Jan  2 16:59:49.082: INFO: Got endpoints: latency-svc-jnc56 [2.721892592s]
Jan  2 16:59:49.184: INFO: Created: latency-svc-kg6vp
Jan  2 16:59:49.214: INFO: Got endpoints: latency-svc-kg6vp [2.456697105s]
Jan  2 16:59:49.260: INFO: Created: latency-svc-6nzhl
Jan  2 16:59:49.284: INFO: Got endpoints: latency-svc-6nzhl [2.501776093s]
Jan  2 16:59:49.488: INFO: Created: latency-svc-pmvqx
Jan  2 16:59:49.526: INFO: Got endpoints: latency-svc-pmvqx [2.699911322s]
Jan  2 16:59:49.783: INFO: Created: latency-svc-4wdxp
Jan  2 16:59:49.783: INFO: Got endpoints: latency-svc-4wdxp [2.724168872s]
Jan  2 16:59:49.953: INFO: Created: latency-svc-nqplg
Jan  2 16:59:49.960: INFO: Got endpoints: latency-svc-nqplg [2.727345012s]
Jan  2 16:59:50.051: INFO: Created: latency-svc-7tn62
Jan  2 16:59:50.117: INFO: Got endpoints: latency-svc-7tn62 [2.610463373s]
Jan  2 16:59:50.190: INFO: Created: latency-svc-zlqks
Jan  2 16:59:50.207: INFO: Got endpoints: latency-svc-zlqks [2.475545964s]
Jan  2 16:59:50.353: INFO: Created: latency-svc-5b792
Jan  2 16:59:50.385: INFO: Created: latency-svc-srhv2
Jan  2 16:59:50.551: INFO: Got endpoints: latency-svc-5b792 [2.659070395s]
Jan  2 16:59:50.586: INFO: Got endpoints: latency-svc-srhv2 [2.390610735s]
Jan  2 16:59:50.730: INFO: Created: latency-svc-pxs4d
Jan  2 16:59:50.761: INFO: Got endpoints: latency-svc-pxs4d [2.30697432s]
Jan  2 16:59:50.903: INFO: Created: latency-svc-gpjdf
Jan  2 16:59:50.920: INFO: Got endpoints: latency-svc-gpjdf [2.238773153s]
Jan  2 16:59:51.069: INFO: Created: latency-svc-xmdnw
Jan  2 16:59:51.107: INFO: Got endpoints: latency-svc-xmdnw [2.406663491s]
Jan  2 16:59:51.217: INFO: Created: latency-svc-hc74k
Jan  2 16:59:51.310: INFO: Got endpoints: latency-svc-hc74k [2.446524424s]
Jan  2 16:59:51.332: INFO: Created: latency-svc-r75s8
Jan  2 16:59:51.476: INFO: Got endpoints: latency-svc-r75s8 [2.480924851s]
Jan  2 16:59:51.548: INFO: Created: latency-svc-8vs6k
Jan  2 16:59:51.556: INFO: Got endpoints: latency-svc-8vs6k [2.473697069s]
Jan  2 16:59:51.732: INFO: Created: latency-svc-tjrbx
Jan  2 16:59:51.740: INFO: Got endpoints: latency-svc-tjrbx [2.526223182s]
Jan  2 16:59:51.891: INFO: Created: latency-svc-9cl2x
Jan  2 16:59:51.916: INFO: Got endpoints: latency-svc-9cl2x [2.631495598s]
Jan  2 16:59:52.052: INFO: Created: latency-svc-5hzst
Jan  2 16:59:52.256: INFO: Got endpoints: latency-svc-5hzst [2.729722184s]
Jan  2 16:59:52.445: INFO: Created: latency-svc-8nm4w
Jan  2 16:59:52.445: INFO: Got endpoints: latency-svc-8nm4w [2.661369187s]
Jan  2 16:59:52.742: INFO: Created: latency-svc-5b82z
Jan  2 16:59:52.763: INFO: Got endpoints: latency-svc-5b82z [2.802765368s]
Jan  2 16:59:53.159: INFO: Created: latency-svc-h56zj
Jan  2 16:59:53.172: INFO: Got endpoints: latency-svc-h56zj [3.053890309s]
Jan  2 16:59:53.294: INFO: Created: latency-svc-vgbdh
Jan  2 16:59:53.346: INFO: Got endpoints: latency-svc-vgbdh [3.138648638s]
Jan  2 16:59:53.397: INFO: Created: latency-svc-c9kd5
Jan  2 16:59:53.574: INFO: Got endpoints: latency-svc-c9kd5 [3.022278811s]
Jan  2 16:59:53.818: INFO: Created: latency-svc-l49cn
Jan  2 16:59:53.837: INFO: Got endpoints: latency-svc-l49cn [3.250648578s]
Jan  2 16:59:53.972: INFO: Created: latency-svc-pbxf5
Jan  2 16:59:53.991: INFO: Got endpoints: latency-svc-pbxf5 [3.229214912s]
Jan  2 16:59:54.167: INFO: Created: latency-svc-7krf4
Jan  2 16:59:54.167: INFO: Got endpoints: latency-svc-7krf4 [3.247417048s]
Jan  2 16:59:54.242: INFO: Created: latency-svc-bqlm7
Jan  2 16:59:54.329: INFO: Got endpoints: latency-svc-bqlm7 [3.222119346s]
Jan  2 16:59:54.372: INFO: Created: latency-svc-qx5ds
Jan  2 16:59:54.531: INFO: Got endpoints: latency-svc-qx5ds [3.220541726s]
Jan  2 16:59:54.560: INFO: Created: latency-svc-w6x4z
Jan  2 16:59:54.576: INFO: Got endpoints: latency-svc-w6x4z [3.099727862s]
Jan  2 16:59:54.805: INFO: Created: latency-svc-995pq
Jan  2 16:59:54.830: INFO: Got endpoints: latency-svc-995pq [3.274469853s]
Jan  2 16:59:54.941: INFO: Created: latency-svc-kx8jn
Jan  2 16:59:54.970: INFO: Got endpoints: latency-svc-kx8jn [3.230149681s]
Jan  2 16:59:55.037: INFO: Created: latency-svc-svmgf
Jan  2 16:59:55.127: INFO: Got endpoints: latency-svc-svmgf [3.210285489s]
Jan  2 16:59:55.167: INFO: Created: latency-svc-wkfjf
Jan  2 16:59:55.193: INFO: Got endpoints: latency-svc-wkfjf [2.936323257s]
Jan  2 16:59:55.312: INFO: Created: latency-svc-x6qps
Jan  2 16:59:55.344: INFO: Got endpoints: latency-svc-x6qps [2.899490241s]
Jan  2 16:59:55.561: INFO: Created: latency-svc-x49w6
Jan  2 16:59:55.580: INFO: Got endpoints: latency-svc-x49w6 [2.815982992s]
Jan  2 16:59:55.772: INFO: Created: latency-svc-cb8hg
Jan  2 16:59:55.773: INFO: Got endpoints: latency-svc-cb8hg [2.600688547s]
Jan  2 16:59:55.850: INFO: Created: latency-svc-ms2vm
Jan  2 16:59:55.935: INFO: Got endpoints: latency-svc-ms2vm [2.588718233s]
Jan  2 16:59:56.002: INFO: Created: latency-svc-c4qhc
Jan  2 16:59:56.011: INFO: Got endpoints: latency-svc-c4qhc [2.437009567s]
Jan  2 16:59:56.217: INFO: Created: latency-svc-5f4mb
Jan  2 16:59:56.239: INFO: Got endpoints: latency-svc-5f4mb [2.401418521s]
Jan  2 16:59:56.428: INFO: Created: latency-svc-s2hqb
Jan  2 16:59:56.489: INFO: Got endpoints: latency-svc-s2hqb [2.498218829s]
Jan  2 16:59:56.754: INFO: Created: latency-svc-t64bd
Jan  2 16:59:56.819: INFO: Got endpoints: latency-svc-t64bd [2.651207907s]
Jan  2 16:59:57.051: INFO: Created: latency-svc-pb5xk
Jan  2 16:59:57.079: INFO: Got endpoints: latency-svc-pb5xk [2.749129304s]
Jan  2 16:59:57.303: INFO: Created: latency-svc-sk78t
Jan  2 16:59:57.338: INFO: Got endpoints: latency-svc-sk78t [2.805861413s]
Jan  2 16:59:58.063: INFO: Created: latency-svc-g2bjv
Jan  2 16:59:58.064: INFO: Got endpoints: latency-svc-g2bjv [3.486983874s]
Jan  2 16:59:58.260: INFO: Created: latency-svc-tls68
Jan  2 16:59:58.287: INFO: Got endpoints: latency-svc-tls68 [3.456060985s]
Jan  2 16:59:58.552: INFO: Created: latency-svc-np5rd
Jan  2 16:59:58.552: INFO: Got endpoints: latency-svc-np5rd [3.581141554s]
Jan  2 16:59:58.799: INFO: Created: latency-svc-hkpx8
Jan  2 16:59:58.994: INFO: Got endpoints: latency-svc-hkpx8 [3.86715252s]
Jan  2 16:59:59.231: INFO: Created: latency-svc-j86kn
Jan  2 16:59:59.321: INFO: Got endpoints: latency-svc-j86kn [4.128211654s]
Jan  2 16:59:59.529: INFO: Created: latency-svc-7vpx8
Jan  2 16:59:59.690: INFO: Got endpoints: latency-svc-7vpx8 [4.345695051s]
Jan  2 16:59:59.959: INFO: Created: latency-svc-t2vzd
Jan  2 16:59:59.984: INFO: Got endpoints: latency-svc-t2vzd [4.404091045s]
Jan  2 17:00:00.125: INFO: Created: latency-svc-ss449
Jan  2 17:00:00.146: INFO: Got endpoints: latency-svc-ss449 [4.373495705s]
Jan  2 17:00:00.200: INFO: Created: latency-svc-lwnt5
Jan  2 17:00:00.212: INFO: Got endpoints: latency-svc-lwnt5 [4.276528257s]
Jan  2 17:00:00.320: INFO: Created: latency-svc-5r5k2
Jan  2 17:00:00.344: INFO: Got endpoints: latency-svc-5r5k2 [4.332308622s]
Jan  2 17:00:00.397: INFO: Created: latency-svc-5z52t
Jan  2 17:00:00.551: INFO: Created: latency-svc-c8vj2
Jan  2 17:00:00.736: INFO: Created: latency-svc-fxdwt
Jan  2 17:00:00.737: INFO: Got endpoints: latency-svc-5z52t [4.497619194s]
Jan  2 17:00:00.739: INFO: Got endpoints: latency-svc-fxdwt [3.920290044s]
Jan  2 17:00:00.751: INFO: Got endpoints: latency-svc-c8vj2 [4.261717881s]
Jan  2 17:00:00.812: INFO: Created: latency-svc-wgw67
Jan  2 17:00:00.951: INFO: Created: latency-svc-nlbxg
Jan  2 17:00:00.953: INFO: Got endpoints: latency-svc-wgw67 [3.873744149s]
Jan  2 17:00:00.967: INFO: Got endpoints: latency-svc-nlbxg [3.62867868s]
Jan  2 17:00:01.091: INFO: Created: latency-svc-z46b8
Jan  2 17:00:01.121: INFO: Got endpoints: latency-svc-z46b8 [3.057542889s]
Jan  2 17:00:01.175: INFO: Created: latency-svc-z7j6d
Jan  2 17:00:01.267: INFO: Got endpoints: latency-svc-z7j6d [2.979468425s]
Jan  2 17:00:01.295: INFO: Created: latency-svc-k8lft
Jan  2 17:00:01.309: INFO: Got endpoints: latency-svc-k8lft [2.756550466s]
Jan  2 17:00:01.350: INFO: Created: latency-svc-8pb95
Jan  2 17:00:01.451: INFO: Got endpoints: latency-svc-8pb95 [2.45680881s]
Jan  2 17:00:01.494: INFO: Created: latency-svc-n9plh
Jan  2 17:00:01.530: INFO: Got endpoints: latency-svc-n9plh [2.208487338s]
Jan  2 17:00:01.648: INFO: Created: latency-svc-gzhkv
Jan  2 17:00:01.674: INFO: Got endpoints: latency-svc-gzhkv [1.983515724s]
Jan  2 17:00:01.728: INFO: Created: latency-svc-4pcd7
Jan  2 17:00:01.832: INFO: Got endpoints: latency-svc-4pcd7 [1.847593112s]
Jan  2 17:00:01.855: INFO: Created: latency-svc-k9nsz
Jan  2 17:00:01.936: INFO: Created: latency-svc-xb74m
Jan  2 17:00:01.936: INFO: Got endpoints: latency-svc-k9nsz [261.759489ms]
Jan  2 17:00:02.020: INFO: Got endpoints: latency-svc-xb74m [1.87386099s]
Jan  2 17:00:02.112: INFO: Created: latency-svc-dn5bg
Jan  2 17:00:02.209: INFO: Got endpoints: latency-svc-dn5bg [1.996874306s]
Jan  2 17:00:02.256: INFO: Created: latency-svc-crmhg
Jan  2 17:00:02.367: INFO: Got endpoints: latency-svc-crmhg [2.023236011s]
Jan  2 17:00:02.423: INFO: Created: latency-svc-dzl4v
Jan  2 17:00:02.445: INFO: Got endpoints: latency-svc-dzl4v [1.70781592s]
Jan  2 17:00:02.614: INFO: Created: latency-svc-nt8hc
Jan  2 17:00:02.635: INFO: Got endpoints: latency-svc-nt8hc [1.895913879s]
Jan  2 17:00:02.837: INFO: Created: latency-svc-95mpq
Jan  2 17:00:02.851: INFO: Got endpoints: latency-svc-95mpq [2.099268342s]
Jan  2 17:00:02.920: INFO: Created: latency-svc-2475x
Jan  2 17:00:03.099: INFO: Got endpoints: latency-svc-2475x [2.145809972s]
Jan  2 17:00:03.145: INFO: Created: latency-svc-5hvbb
Jan  2 17:00:03.162: INFO: Got endpoints: latency-svc-5hvbb [2.194955109s]
Jan  2 17:00:03.317: INFO: Created: latency-svc-ldxx7
Jan  2 17:00:03.318: INFO: Got endpoints: latency-svc-ldxx7 [2.196480689s]
Jan  2 17:00:03.400: INFO: Created: latency-svc-rtfr8
Jan  2 17:00:03.570: INFO: Got endpoints: latency-svc-rtfr8 [2.302661742s]
Jan  2 17:00:03.621: INFO: Created: latency-svc-c4bmj
Jan  2 17:00:03.654: INFO: Got endpoints: latency-svc-c4bmj [2.345378349s]
Jan  2 17:00:03.759: INFO: Created: latency-svc-r699t
Jan  2 17:00:03.795: INFO: Got endpoints: latency-svc-r699t [2.343340087s]
Jan  2 17:00:03.941: INFO: Created: latency-svc-txn4r
Jan  2 17:00:03.964: INFO: Got endpoints: latency-svc-txn4r [2.433223908s]
Jan  2 17:00:04.029: INFO: Created: latency-svc-rbzwd
Jan  2 17:00:04.160: INFO: Got endpoints: latency-svc-rbzwd [2.327618758s]
Jan  2 17:00:04.201: INFO: Created: latency-svc-79gwn
Jan  2 17:00:04.218: INFO: Got endpoints: latency-svc-79gwn [2.280971072s]
Jan  2 17:00:04.356: INFO: Created: latency-svc-g4j27
Jan  2 17:00:04.369: INFO: Got endpoints: latency-svc-g4j27 [2.347920691s]
Jan  2 17:00:04.543: INFO: Created: latency-svc-4bv4p
Jan  2 17:00:04.593: INFO: Created: latency-svc-5qknd
Jan  2 17:00:04.593: INFO: Got endpoints: latency-svc-4bv4p [2.383141256s]
Jan  2 17:00:04.692: INFO: Got endpoints: latency-svc-5qknd [2.324287441s]
Jan  2 17:00:04.716: INFO: Created: latency-svc-c927k
Jan  2 17:00:04.734: INFO: Got endpoints: latency-svc-c927k [2.28904458s]
Jan  2 17:00:04.778: INFO: Created: latency-svc-4r5gz
Jan  2 17:00:04.793: INFO: Got endpoints: latency-svc-4r5gz [2.157217814s]
Jan  2 17:00:04.920: INFO: Created: latency-svc-kc87v
Jan  2 17:00:04.992: INFO: Got endpoints: latency-svc-kc87v [2.141489166s]
Jan  2 17:00:05.094: INFO: Created: latency-svc-dzj92
Jan  2 17:00:05.103: INFO: Got endpoints: latency-svc-dzj92 [2.004212374s]
Jan  2 17:00:05.156: INFO: Created: latency-svc-zzm7f
Jan  2 17:00:05.294: INFO: Got endpoints: latency-svc-zzm7f [2.13228582s]
Jan  2 17:00:05.297: INFO: Created: latency-svc-hrvhf
Jan  2 17:00:05.310: INFO: Got endpoints: latency-svc-hrvhf [1.991787659s]
Jan  2 17:00:05.385: INFO: Created: latency-svc-wlmp2
Jan  2 17:00:05.499: INFO: Got endpoints: latency-svc-wlmp2 [1.928559689s]
Jan  2 17:00:05.595: INFO: Created: latency-svc-dlz9x
Jan  2 17:00:05.680: INFO: Got endpoints: latency-svc-dlz9x [2.025615099s]
Jan  2 17:00:05.724: INFO: Created: latency-svc-s72v9
Jan  2 17:00:05.753: INFO: Got endpoints: latency-svc-s72v9 [1.957518127s]
Jan  2 17:00:05.938: INFO: Created: latency-svc-79fqc
Jan  2 17:00:05.938: INFO: Got endpoints: latency-svc-79fqc [1.974276899s]
Jan  2 17:00:06.009: INFO: Created: latency-svc-6lkfj
Jan  2 17:00:06.118: INFO: Got endpoints: latency-svc-6lkfj [1.957503438s]
Jan  2 17:00:06.156: INFO: Created: latency-svc-zv4bp
Jan  2 17:00:06.183: INFO: Got endpoints: latency-svc-zv4bp [1.965160055s]
Jan  2 17:00:06.360: INFO: Created: latency-svc-td624
Jan  2 17:00:06.387: INFO: Got endpoints: latency-svc-td624 [2.017780349s]
Jan  2 17:00:06.624: INFO: Created: latency-svc-fsv8q
Jan  2 17:00:06.629: INFO: Got endpoints: latency-svc-fsv8q [2.035343839s]
Jan  2 17:00:06.805: INFO: Created: latency-svc-lgj5t
Jan  2 17:00:06.831: INFO: Got endpoints: latency-svc-lgj5t [2.138399199s]
Jan  2 17:00:06.937: INFO: Created: latency-svc-q8l2f
Jan  2 17:00:06.961: INFO: Got endpoints: latency-svc-q8l2f [2.226406857s]
Jan  2 17:00:07.037: INFO: Created: latency-svc-868m7
Jan  2 17:00:07.145: INFO: Got endpoints: latency-svc-868m7 [2.351655232s]
Jan  2 17:00:07.188: INFO: Created: latency-svc-n47lj
Jan  2 17:00:07.230: INFO: Got endpoints: latency-svc-n47lj [2.237153685s]
Jan  2 17:00:07.373: INFO: Created: latency-svc-crtb8
Jan  2 17:00:07.382: INFO: Got endpoints: latency-svc-crtb8 [2.278374637s]
Jan  2 17:00:07.630: INFO: Created: latency-svc-8whbc
Jan  2 17:00:07.630: INFO: Got endpoints: latency-svc-8whbc [2.334827165s]
Jan  2 17:00:07.676: INFO: Created: latency-svc-6mlsw
Jan  2 17:00:07.680: INFO: Got endpoints: latency-svc-6mlsw [2.370186931s]
Jan  2 17:00:08.967: INFO: Created: latency-svc-wg2x2
Jan  2 17:00:09.013: INFO: Got endpoints: latency-svc-wg2x2 [3.513878019s]
Jan  2 17:00:09.175: INFO: Created: latency-svc-rxqjl
Jan  2 17:00:09.199: INFO: Got endpoints: latency-svc-rxqjl [3.518919744s]
Jan  2 17:00:09.254: INFO: Created: latency-svc-p69f4
Jan  2 17:00:09.369: INFO: Got endpoints: latency-svc-p69f4 [3.61585832s]
Jan  2 17:00:09.390: INFO: Created: latency-svc-fzm6m
Jan  2 17:00:09.445: INFO: Got endpoints: latency-svc-fzm6m [3.507176044s]
Jan  2 17:00:09.694: INFO: Created: latency-svc-smsjq
Jan  2 17:00:09.695: INFO: Got endpoints: latency-svc-smsjq [3.576319913s]
Jan  2 17:00:09.740: INFO: Created: latency-svc-jljcv
Jan  2 17:00:09.891: INFO: Got endpoints: latency-svc-jljcv [3.708004263s]
Jan  2 17:00:10.024: INFO: Created: latency-svc-fh8j2
Jan  2 17:00:10.106: INFO: Got endpoints: latency-svc-fh8j2 [3.71889508s]
Jan  2 17:00:10.115: INFO: Created: latency-svc-8x4rt
Jan  2 17:00:10.135: INFO: Got endpoints: latency-svc-8x4rt [3.505491648s]
Jan  2 17:00:10.299: INFO: Created: latency-svc-8lh77
Jan  2 17:00:10.359: INFO: Got endpoints: latency-svc-8lh77 [3.528453196s]
Jan  2 17:00:10.399: INFO: Created: latency-svc-cn5dh
Jan  2 17:00:10.477: INFO: Got endpoints: latency-svc-cn5dh [3.515182903s]
Jan  2 17:00:10.527: INFO: Created: latency-svc-hf7qf
Jan  2 17:00:10.663: INFO: Got endpoints: latency-svc-hf7qf [3.517826947s]
Jan  2 17:00:10.704: INFO: Created: latency-svc-2xndc
Jan  2 17:00:10.713: INFO: Got endpoints: latency-svc-2xndc [3.482651032s]
Jan  2 17:00:10.843: INFO: Created: latency-svc-fn4rn
Jan  2 17:00:10.917: INFO: Created: latency-svc-nlpn2
Jan  2 17:00:11.037: INFO: Got endpoints: latency-svc-fn4rn [3.654779654s]
Jan  2 17:00:11.066: INFO: Created: latency-svc-lnv7k
Jan  2 17:00:11.081: INFO: Got endpoints: latency-svc-nlpn2 [3.451522343s]
Jan  2 17:00:11.081: INFO: Got endpoints: latency-svc-lnv7k [3.400745448s]
Jan  2 17:00:11.299: INFO: Created: latency-svc-xrb5t
Jan  2 17:00:11.323: INFO: Got endpoints: latency-svc-xrb5t [2.309478734s]
Jan  2 17:00:11.392: INFO: Created: latency-svc-5kvtj
Jan  2 17:00:11.460: INFO: Got endpoints: latency-svc-5kvtj [2.260712602s]
Jan  2 17:00:11.503: INFO: Created: latency-svc-vj2wl
Jan  2 17:00:11.517: INFO: Got endpoints: latency-svc-vj2wl [2.147637807s]
Jan  2 17:00:11.686: INFO: Created: latency-svc-qdjg8
Jan  2 17:00:11.701: INFO: Got endpoints: latency-svc-qdjg8 [2.255290316s]
Jan  2 17:00:11.918: INFO: Created: latency-svc-mzrqv
Jan  2 17:00:11.957: INFO: Got endpoints: latency-svc-mzrqv [2.261787902s]
Jan  2 17:00:12.103: INFO: Created: latency-svc-fh7xz
Jan  2 17:00:12.117: INFO: Got endpoints: latency-svc-fh7xz [2.225287627s]
Jan  2 17:00:12.292: INFO: Created: latency-svc-dm4vb
Jan  2 17:00:12.324: INFO: Got endpoints: latency-svc-dm4vb [2.217650216s]
Jan  2 17:00:12.376: INFO: Created: latency-svc-td2dw
Jan  2 17:00:12.527: INFO: Got endpoints: latency-svc-td2dw [2.392613979s]
Jan  2 17:00:12.554: INFO: Created: latency-svc-ktljv
Jan  2 17:00:12.595: INFO: Got endpoints: latency-svc-ktljv [2.235549622s]
Jan  2 17:00:12.760: INFO: Created: latency-svc-qtj8z
Jan  2 17:00:12.784: INFO: Got endpoints: latency-svc-qtj8z [2.306776595s]
Jan  2 17:00:12.895: INFO: Created: latency-svc-n8btq
Jan  2 17:00:12.928: INFO: Got endpoints: latency-svc-n8btq [2.264069128s]
Jan  2 17:00:13.047: INFO: Created: latency-svc-2gnjb
Jan  2 17:00:13.076: INFO: Got endpoints: latency-svc-2gnjb [2.362931619s]
Jan  2 17:00:13.196: INFO: Created: latency-svc-8xdxl
Jan  2 17:00:13.228: INFO: Got endpoints: latency-svc-8xdxl [2.190634288s]
Jan  2 17:00:13.299: INFO: Created: latency-svc-7ttkc
Jan  2 17:00:13.450: INFO: Got endpoints: latency-svc-7ttkc [2.367877323s]
Jan  2 17:00:13.510: INFO: Created: latency-svc-9rnk6
Jan  2 17:00:13.779: INFO: Got endpoints: latency-svc-9rnk6 [2.696900224s]
Jan  2 17:00:13.859: INFO: Created: latency-svc-xg8jh
Jan  2 17:00:14.018: INFO: Got endpoints: latency-svc-xg8jh [2.694583952s]
Jan  2 17:00:14.069: INFO: Created: latency-svc-ghvxg
Jan  2 17:00:14.099: INFO: Got endpoints: latency-svc-ghvxg [2.638773194s]
Jan  2 17:00:14.278: INFO: Created: latency-svc-qrhd6
Jan  2 17:00:14.337: INFO: Got endpoints: latency-svc-qrhd6 [2.819143021s]
Jan  2 17:00:14.537: INFO: Created: latency-svc-262t5
Jan  2 17:00:14.673: INFO: Got endpoints: latency-svc-262t5 [2.971884877s]
Jan  2 17:00:14.813: INFO: Created: latency-svc-2gfzt
Jan  2 17:00:14.845: INFO: Got endpoints: latency-svc-2gfzt [2.887618993s]
Jan  2 17:00:15.007: INFO: Created: latency-svc-zk8jh
Jan  2 17:00:15.024: INFO: Got endpoints: latency-svc-zk8jh [2.905896954s]
Jan  2 17:00:15.220: INFO: Created: latency-svc-5lbz9
Jan  2 17:00:15.224: INFO: Got endpoints: latency-svc-5lbz9 [2.89958626s]
Jan  2 17:00:15.428: INFO: Created: latency-svc-9wxvg
Jan  2 17:00:15.428: INFO: Got endpoints: latency-svc-9wxvg [2.900350743s]
Jan  2 17:00:15.634: INFO: Created: latency-svc-7djg2
Jan  2 17:00:15.651: INFO: Got endpoints: latency-svc-7djg2 [3.055599824s]
Jan  2 17:00:15.721: INFO: Created: latency-svc-pql6t
Jan  2 17:00:15.862: INFO: Got endpoints: latency-svc-pql6t [3.078144002s]
Jan  2 17:00:15.900: INFO: Created: latency-svc-dbxgk
Jan  2 17:00:15.940: INFO: Got endpoints: latency-svc-dbxgk [3.011995099s]
Jan  2 17:00:16.089: INFO: Created: latency-svc-p56ps
Jan  2 17:00:16.098: INFO: Got endpoints: latency-svc-p56ps [3.021111806s]
Jan  2 17:00:16.174: INFO: Created: latency-svc-vfp2q
Jan  2 17:00:16.328: INFO: Got endpoints: latency-svc-vfp2q [3.098735663s]
Jan  2 17:00:16.357: INFO: Created: latency-svc-56t4q
Jan  2 17:00:16.420: INFO: Got endpoints: latency-svc-56t4q [2.969656787s]
Jan  2 17:00:16.625: INFO: Created: latency-svc-fzxzl
Jan  2 17:00:16.640: INFO: Got endpoints: latency-svc-fzxzl [2.86040155s]
Jan  2 17:00:16.749: INFO: Created: latency-svc-9ml52
Jan  2 17:00:16.803: INFO: Got endpoints: latency-svc-9ml52 [2.785208846s]
Jan  2 17:00:16.814: INFO: Created: latency-svc-jlhmw
Jan  2 17:00:16.825: INFO: Got endpoints: latency-svc-jlhmw [2.724949805s]
Jan  2 17:00:16.935: INFO: Created: latency-svc-d9slg
Jan  2 17:00:16.958: INFO: Got endpoints: latency-svc-d9slg [2.620708769s]
Jan  2 17:00:17.014: INFO: Created: latency-svc-jf5zk
Jan  2 17:00:17.092: INFO: Got endpoints: latency-svc-jf5zk [2.417427134s]
Jan  2 17:00:17.092: INFO: Latencies: [261.759489ms 301.250671ms 347.341538ms 501.700315ms 691.775141ms 730.155239ms 923.279289ms 968.088234ms 1.218985671s 1.70781592s 1.847593112s 1.87386099s 1.895913879s 1.908524424s 1.928559689s 1.957503438s 1.957518127s 1.965160055s 1.974276899s 1.983515724s 1.991787659s 1.996874306s 2.004212374s 2.017780349s 2.023236011s 2.025615099s 2.035343839s 2.099268342s 2.13228582s 2.138399199s 2.141489166s 2.145809972s 2.147637807s 2.157217814s 2.174104606s 2.190634288s 2.194955109s 2.196480689s 2.208487338s 2.217650216s 2.225287627s 2.226406857s 2.235549622s 2.237153685s 2.238773153s 2.255290316s 2.260712602s 2.261787902s 2.264069128s 2.278374637s 2.280971072s 2.28201191s 2.28904458s 2.302661742s 2.306776595s 2.30697432s 2.309478734s 2.324287441s 2.327618758s 2.334827165s 2.343340087s 2.345378349s 2.347920691s 2.351655232s 2.362931619s 2.367877323s 2.370186931s 2.383141256s 2.390610735s 2.392613979s 2.401418521s 2.406663491s 2.417427134s 2.433223908s 2.437009567s 2.446524424s 2.456697105s 2.45680881s 2.470226459s 2.473697069s 2.475545964s 2.480924851s 2.498218829s 2.501776093s 2.526223182s 2.588718233s 2.597157852s 2.600688547s 2.605182791s 2.610463373s 2.620708769s 2.628449136s 2.631495598s 2.638773194s 2.651207907s 2.659070395s 2.661369187s 2.665487268s 2.689617073s 2.694583952s 2.696900224s 2.699911322s 2.711548649s 2.713877298s 2.718742396s 2.721576901s 2.721892592s 2.724168872s 2.724949805s 2.727345012s 2.729722184s 2.733457383s 2.734786987s 2.749129304s 2.756550466s 2.759053907s 2.785208846s 2.802765368s 2.804375105s 2.805861413s 2.806283553s 2.815982992s 2.819143021s 2.837887222s 2.846795129s 2.85087286s 2.86040155s 2.887618993s 2.899277697s 2.899490241s 2.89958626s 2.900350743s 2.904446903s 2.905896954s 2.934253126s 2.936323257s 2.948353946s 2.969656787s 2.969921284s 2.971884877s 2.979468425s 2.996311846s 3.005709129s 3.011995099s 3.021111806s 3.022278811s 3.053890309s 3.055599824s 3.057542889s 3.078144002s 3.098735663s 3.099727862s 3.135161795s 3.138648638s 3.180799646s 3.18275626s 3.210285489s 3.220541726s 3.222119346s 3.229214912s 3.230149681s 3.24151049s 3.247417048s 3.250648578s 3.266342982s 3.274469853s 3.400745448s 3.40773691s 3.451522343s 3.456060985s 3.482399701s 3.482651032s 3.486983874s 3.505491648s 3.507176044s 3.513878019s 3.515182903s 3.517826947s 3.518919744s 3.528453196s 3.532657051s 3.562565158s 3.576319913s 3.581141554s 3.61585832s 3.62867868s 3.654779654s 3.708004263s 3.71889508s 3.86715252s 3.873744149s 3.920290044s 4.128211654s 4.261717881s 4.276528257s 4.332308622s 4.345695051s 4.373495705s 4.404091045s 4.497619194s]
Jan  2 17:00:17.092: INFO: 50 %ile: 2.696900224s
Jan  2 17:00:17.092: INFO: 90 %ile: 3.532657051s
Jan  2 17:00:17.092: INFO: 99 %ile: 4.404091045s
Jan  2 17:00:17.092: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:00:17.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-gvstl" for this suite.
Jan  2 17:01:15.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:01:15.217: INFO: namespace: e2e-tests-svc-latency-gvstl, resource: bindings, ignored listing per whitelist
Jan  2 17:01:15.329: INFO: namespace e2e-tests-svc-latency-gvstl deletion completed in 58.214868007s

• [SLOW TEST:104.616 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:01:15.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  2 17:01:15.597: INFO: Waiting up to 5m0s for pod "pod-7cc7dbef-2d81-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-kvpw8" to be "success or failure"
Jan  2 17:01:15.607: INFO: Pod "pod-7cc7dbef-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.372214ms
Jan  2 17:01:17.632: INFO: Pod "pod-7cc7dbef-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035406511s
Jan  2 17:01:19.666: INFO: Pod "pod-7cc7dbef-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069681414s
Jan  2 17:01:21.747: INFO: Pod "pod-7cc7dbef-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149910268s
Jan  2 17:01:24.035: INFO: Pod "pod-7cc7dbef-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.438396414s
Jan  2 17:01:26.051: INFO: Pod "pod-7cc7dbef-2d81-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.453828261s
STEP: Saw pod success
Jan  2 17:01:26.051: INFO: Pod "pod-7cc7dbef-2d81-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:01:26.057: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7cc7dbef-2d81-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 17:01:26.981: INFO: Waiting for pod pod-7cc7dbef-2d81-11ea-b611-0242ac110005 to disappear
Jan  2 17:01:26.992: INFO: Pod pod-7cc7dbef-2d81-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:01:26.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kvpw8" for this suite.
Jan  2 17:01:33.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:01:33.252: INFO: namespace: e2e-tests-emptydir-kvpw8, resource: bindings, ignored listing per whitelist
Jan  2 17:01:33.351: INFO: namespace e2e-tests-emptydir-kvpw8 deletion completed in 6.350967783s

• [SLOW TEST:18.022 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:01:33.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 17:02:07.723: INFO: Container started at 2020-01-02 17:01:42 +0000 UTC, pod became ready at 2020-01-02 17:02:05 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:02:07.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-5jb7v" for this suite.
Jan  2 17:02:31.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:02:31.873: INFO: namespace: e2e-tests-container-probe-5jb7v, resource: bindings, ignored listing per whitelist
Jan  2 17:02:31.996: INFO: namespace e2e-tests-container-probe-5jb7v deletion completed in 24.261478308s

• [SLOW TEST:58.645 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:02:31.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan  2 17:02:32.775: INFO: created pod pod-service-account-defaultsa
Jan  2 17:02:32.775: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  2 17:02:32.805: INFO: created pod pod-service-account-mountsa
Jan  2 17:02:32.805: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  2 17:02:32.905: INFO: created pod pod-service-account-nomountsa
Jan  2 17:02:32.905: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  2 17:02:32.946: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  2 17:02:32.946: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  2 17:02:33.093: INFO: created pod pod-service-account-mountsa-mountspec
Jan  2 17:02:33.093: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  2 17:02:33.165: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  2 17:02:33.165: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  2 17:02:34.588: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  2 17:02:34.588: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  2 17:02:35.779: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  2 17:02:35.779: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  2 17:02:36.212: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  2 17:02:36.212: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:02:36.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-dc226" for this suite.
Jan  2 17:03:02.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:03:02.406: INFO: namespace: e2e-tests-svcaccounts-dc226, resource: bindings, ignored listing per whitelist
Jan  2 17:03:02.521: INFO: namespace e2e-tests-svcaccounts-dc226 deletion completed in 26.288905616s

• [SLOW TEST:30.525 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
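The nine pods above cover the full automount matrix. As a sketch only (plain Python, not the actual logic in the Kubernetes source), the rule this test verifies is: an explicit `pod.spec.automountServiceAccountToken` always wins, otherwise the ServiceAccount's own setting applies, and the default is to mount the token:

```python
def token_automounted(pod_setting, sa_setting):
    """Decide whether the token volume is mounted (sketch, not k8s code).

    pod_setting: pod.spec.automountServiceAccountToken (None if unset)
    sa_setting:  the ServiceAccount's automountServiceAccountToken (None if unset)
    """
    if pod_setting is not None:   # an explicit pod-spec value always wins
        return pod_setting
    if sa_setting is not None:    # otherwise defer to the ServiceAccount
        return sa_setting
    return True                   # default behavior: mount the token

# (sa_setting, pod_setting, expected) reproducing the nine log lines above:
matrix = [
    (None,  None,  True),   # pod-service-account-defaultsa
    (True,  None,  True),   # pod-service-account-mountsa
    (False, None,  False),  # pod-service-account-nomountsa
    (None,  True,  True),   # pod-service-account-defaultsa-mountspec
    (True,  True,  True),   # pod-service-account-mountsa-mountspec
    (False, True,  True),   # pod-service-account-nomountsa-mountspec
    (None,  False, False),  # pod-service-account-defaultsa-nomountspec
    (True,  False, False),  # pod-service-account-mountsa-nomountspec
    (False, False, False),  # pod-service-account-nomountsa-nomountspec
]
for sa, pod, expected in matrix:
    assert token_automounted(pod, sa) == expected
```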
SSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:03:02.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 17:03:02.885: INFO: Creating deployment "test-recreate-deployment"
Jan  2 17:03:02.896: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  2 17:03:02.909: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan  2 17:03:04.976: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  2 17:03:04.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581382, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 17:03:07.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581382, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 17:03:09.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581382, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 17:03:11.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581382, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 17:03:12.999: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581383, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713581382, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 17:03:15.010: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  2 17:03:15.053: INFO: Updating deployment test-recreate-deployment
Jan  2 17:03:15.054: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 17:03:16.165: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-fsthn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fsthn/deployments/test-recreate-deployment,UID:bcbee8e0-2d81-11ea-a994-fa163e34d433,ResourceVersion:16933849,Generation:2,CreationTimestamp:2020-01-02 17:03:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-02 17:03:15 +0000 UTC 2020-01-02 17:03:15 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-02 17:03:15 +0000 UTC 2020-01-02 17:03:02 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  2 17:03:16.177: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-fsthn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fsthn/replicasets/test-recreate-deployment-589c4bfd,UID:c4487546-2d81-11ea-a994-fa163e34d433,ResourceVersion:16933846,Generation:1,CreationTimestamp:2020-01-02 17:03:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment bcbee8e0-2d81-11ea-a994-fa163e34d433 0xc002252acf 0xc002252ae0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 17:03:16.177: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  2 17:03:16.177: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-fsthn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fsthn/replicasets/test-recreate-deployment-5bf7f65dc,UID:bcc2cf96-2d81-11ea-a994-fa163e34d433,ResourceVersion:16933838,Generation:2,CreationTimestamp:2020-01-02 17:03:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment bcbee8e0-2d81-11ea-a994-fa163e34d433 0xc002252bb0 0xc002252bb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 17:03:16.777: INFO: Pod "test-recreate-deployment-589c4bfd-cml2t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-cml2t,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-fsthn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fsthn/pods/test-recreate-deployment-589c4bfd-cml2t,UID:c4504fd1-2d81-11ea-a994-fa163e34d433,ResourceVersion:16933850,Generation:0,CreationTimestamp:2020-01-02 17:03:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd c4487546-2d81-11ea-a994-fa163e34d433 0xc00225346f 0xc002253480}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pdtl7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pdtl7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pdtl7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022534e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002253500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 17:03:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 17:03:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 17:03:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 17:03:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 17:03:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:03:16.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-fsthn" for this suite.
Jan  2 17:03:28.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:03:28.154: INFO: namespace: e2e-tests-deployment-fsthn, resource: bindings, ignored listing per whitelist
Jan  2 17:03:28.284: INFO: namespace e2e-tests-deployment-fsthn deletion completed in 10.654233965s

• [SLOW TEST:25.761 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
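The Recreate test above rolls the deployment from a redis template (revision 1) to an nginx template (revision 2) and watches that pods from the two revisions never run concurrently. A toy model (plain Python, not the real deployment controller) of the ordering that `strategy.type: Recreate` guarantees:

```python
def recreate_rollout(old_pods, new_pods):
    """Toy model of strategy.type=Recreate: scale the old ReplicaSet to
    zero first, and only then bring up the new ReplicaSet's pods."""
    snapshots = []
    running = list(old_pods)
    while running:                 # step 1: terminate every old pod first
        running.pop()
        snapshots.append(list(running))
    for p in new_pods:             # step 2: only then create new pods
        running.append(p)
        snapshots.append(list(running))
    return snapshots

old = ["redis-1"]                  # revision 1 (redis image in the log above)
new = ["nginx-1"]                  # revision 2 (nginx image)
for snap in recreate_rollout(old, new):
    # at no point do pods from both revisions run together
    assert not (set(snap) & set(old)) or not (set(snap) & set(new))
```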
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:03:28.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-c5xqj in namespace e2e-tests-proxy-dwxpj
I0102 17:03:28.712901       8 runners.go:184] Created replication controller with name: proxy-service-c5xqj, namespace: e2e-tests-proxy-dwxpj, replica count: 1
I0102 17:03:29.764121       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 17:03:30.764455       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 17:03:31.764831       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 17:03:32.765267       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 17:03:33.765794       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 17:03:34.766267       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 17:03:35.766848       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 17:03:36.767536       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 17:03:37.768243       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0102 17:03:38.768904       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 17:03:39.769760       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 17:03:40.771602       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 17:03:41.776371       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 17:03:42.776946       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 17:03:43.777617       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0102 17:03:44.778497       8 runners.go:184] proxy-service-c5xqj Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  2 17:03:44.799: INFO: setup took 16.19081298s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  2 17:03:44.827: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-dwxpj/services/http:proxy-service-c5xqj:portname2/proxy/: bar (200; 27.919918ms)
Jan  2 17:03:44.830: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-dwxpj/pods/proxy-service-c5xqj-fb4nf/proxy/: 
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
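The log does not show the CRD the test registers, but the naming rule it must satisfy can be sketched with a hypothetical `foos.example.com` definition (names and group are illustrative; `apiextensions.k8s.io/v1beta1` is the CRD API current in the v1.13 cluster under test):

```python
crd = {
    "apiVersion": "apiextensions.k8s.io/v1beta1",  # CRD API in k8s v1.13
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "foos.example.com"},
    "spec": {
        "group": "example.com",
        "version": "v1",
        "scope": "Namespaced",
        "names": {"plural": "foos", "singular": "foo", "kind": "Foo"},
    },
}

def crd_name_valid(crd):
    """metadata.name must equal spec.names.plural + "." + spec.group."""
    spec = crd["spec"]
    return crd["metadata"]["name"] == f'{spec["names"]["plural"]}.{spec["group"]}'

assert crd_name_valid(crd)
```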
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 17:03:57.836: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:03:59.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-9hmbl" for this suite.
Jan  2 17:04:05.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:04:05.222: INFO: namespace: e2e-tests-custom-resource-definition-9hmbl, resource: bindings, ignored listing per whitelist
Jan  2 17:04:05.388: INFO: namespace e2e-tests-custom-resource-definition-9hmbl deletion completed in 6.335319038s

• [SLOW TEST:7.748 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:04:05.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  2 17:04:18.304: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e224255b-2d81-11ea-b611-0242ac110005"
Jan  2 17:04:18.305: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e224255b-2d81-11ea-b611-0242ac110005" in namespace "e2e-tests-pods-b92mm" to be "terminated due to deadline exceeded"
Jan  2 17:04:18.400: INFO: Pod "pod-update-activedeadlineseconds-e224255b-2d81-11ea-b611-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 95.203508ms
Jan  2 17:04:20.427: INFO: Pod "pod-update-activedeadlineseconds-e224255b-2d81-11ea-b611-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.122451229s
Jan  2 17:04:20.427: INFO: Pod "pod-update-activedeadlineseconds-e224255b-2d81-11ea-b611-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:04:20.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-b92mm" for this suite.
Jan  2 17:04:26.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:04:26.672: INFO: namespace: e2e-tests-pods-b92mm, resource: bindings, ignored listing per whitelist
Jan  2 17:04:26.792: INFO: namespace e2e-tests-pods-b92mm deletion completed in 6.35304933s

• [SLOW TEST:21.404 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
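What the activeDeadlineSeconds test above checks, as a simplified model (plain Python, not kubelet code): once a running pod's age exceeds its updated `spec.activeDeadlineSeconds`, the pod is failed with reason `DeadlineExceeded`, matching the `Phase="Failed", Reason="DeadlineExceeded"` transition in the log:

```python
def pod_phase(elapsed_seconds, active_deadline_seconds):
    """Simplified deadline check (sketch, not kubelet code)."""
    if active_deadline_seconds is not None and elapsed_seconds > active_deadline_seconds:
        return ("Failed", "DeadlineExceeded")
    return ("Running", "")

assert pod_phase(1, 5) == ("Running", "")                 # still within deadline
assert pod_phase(6, 5) == ("Failed", "DeadlineExceeded")  # deadline exceeded
assert pod_phase(3600, None) == ("Running", "")           # no deadline set
```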
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:04:26.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  2 17:04:26.935: INFO: Waiting up to 5m0s for pod "pod-eed61da8-2d81-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-h8xdx" to be "success or failure"
Jan  2 17:04:26.985: INFO: Pod "pod-eed61da8-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.317079ms
Jan  2 17:04:29.551: INFO: Pod "pod-eed61da8-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.616242049s
Jan  2 17:04:31.570: INFO: Pod "pod-eed61da8-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.634835089s
Jan  2 17:04:33.843: INFO: Pod "pod-eed61da8-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.907701999s
Jan  2 17:04:35.877: INFO: Pod "pod-eed61da8-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.942198363s
Jan  2 17:04:37.900: INFO: Pod "pod-eed61da8-2d81-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.964779225s
STEP: Saw pod success
Jan  2 17:04:37.900: INFO: Pod "pod-eed61da8-2d81-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:04:37.914: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-eed61da8-2d81-11ea-b611-0242ac110005 container test-container: <nil>
STEP: delete the pod
Jan  2 17:04:38.130: INFO: Waiting for pod pod-eed61da8-2d81-11ea-b611-0242ac110005 to disappear
Jan  2 17:04:38.140: INFO: Pod pod-eed61da8-2d81-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:04:38.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h8xdx" for this suite.
Jan  2 17:04:46.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:04:46.390: INFO: namespace: e2e-tests-emptydir-h8xdx, resource: bindings, ignored listing per whitelist
Jan  2 17:04:47.034: INFO: namespace e2e-tests-emptydir-h8xdx deletion completed in 8.883936387s

• [SLOW TEST:20.242 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:04:47.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-kd9vx/configmap-test-faf90808-2d81-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 17:04:47.311: INFO: Waiting up to 5m0s for pod "pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005" in namespace "e2e-tests-configmap-kd9vx" to be "success or failure"
Jan  2 17:04:47.328: INFO: Pod "pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.478178ms
Jan  2 17:04:49.356: INFO: Pod "pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044899047s
Jan  2 17:04:51.387: INFO: Pod "pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076125637s
Jan  2 17:04:53.422: INFO: Pod "pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110761613s
Jan  2 17:04:55.447: INFO: Pod "pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135466521s
Jan  2 17:04:57.470: INFO: Pod "pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.158675619s
Jan  2 17:04:59.487: INFO: Pod "pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.176088552s
STEP: Saw pod success
Jan  2 17:04:59.487: INFO: Pod "pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:04:59.493: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005 container env-test: 
STEP: delete the pod
Jan  2 17:04:59.573: INFO: Waiting for pod pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005 to disappear
Jan  2 17:04:59.647: INFO: Pod pod-configmaps-fafa0ee4-2d81-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:04:59.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kd9vx" for this suite.
Jan  2 17:05:05.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:05:05.847: INFO: namespace: e2e-tests-configmap-kd9vx, resource: bindings, ignored listing per whitelist
Jan  2 17:05:05.970: INFO: namespace e2e-tests-configmap-kd9vx deletion completed in 6.307042369s

• [SLOW TEST:18.935 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:05:05.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:05:19.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-zqjqw" for this suite.
Jan  2 17:05:45.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:05:45.897: INFO: namespace: e2e-tests-replication-controller-zqjqw, resource: bindings, ignored listing per whitelist
Jan  2 17:05:45.915: INFO: namespace e2e-tests-replication-controller-zqjqw deletion completed in 26.349007736s

• [SLOW TEST:39.945 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:05:45.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 17:05:46.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e03c420-2d82-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-mtwln" to be "success or failure"
Jan  2 17:05:46.104: INFO: Pod "downwardapi-volume-1e03c420-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.605483ms
Jan  2 17:05:48.237: INFO: Pod "downwardapi-volume-1e03c420-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141168392s
Jan  2 17:05:50.260: INFO: Pod "downwardapi-volume-1e03c420-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16357416s
Jan  2 17:05:52.274: INFO: Pod "downwardapi-volume-1e03c420-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177710216s
Jan  2 17:05:54.304: INFO: Pod "downwardapi-volume-1e03c420-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.208375391s
Jan  2 17:05:56.336: INFO: Pod "downwardapi-volume-1e03c420-2d82-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.239786242s
STEP: Saw pod success
Jan  2 17:05:56.336: INFO: Pod "downwardapi-volume-1e03c420-2d82-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:05:56.346: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1e03c420-2d82-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 17:05:56.605: INFO: Waiting for pod downwardapi-volume-1e03c420-2d82-11ea-b611-0242ac110005 to disappear
Jan  2 17:05:56.618: INFO: Pod downwardapi-volume-1e03c420-2d82-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:05:56.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mtwln" for this suite.
Jan  2 17:06:02.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:06:02.773: INFO: namespace: e2e-tests-downward-api-mtwln, resource: bindings, ignored listing per whitelist
Jan  2 17:06:02.814: INFO: namespace e2e-tests-downward-api-mtwln deletion completed in 6.18867741s

• [SLOW TEST:16.900 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:06:02.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan  2 17:06:03.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x7lbn'
Jan  2 17:06:03.759: INFO: stderr: ""
Jan  2 17:06:03.759: INFO: stdout: "pod/pause created\n"
Jan  2 17:06:03.759: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan  2 17:06:03.760: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-x7lbn" to be "running and ready"
Jan  2 17:06:03.786: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 26.097072ms
Jan  2 17:06:05.996: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236505743s
Jan  2 17:06:08.028: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268365863s
Jan  2 17:06:10.184: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424521058s
Jan  2 17:06:12.197: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.437305821s
Jan  2 17:06:12.197: INFO: Pod "pause" satisfied condition "running and ready"
Jan  2 17:06:12.197: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan  2 17:06:12.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-x7lbn'
Jan  2 17:06:12.427: INFO: stderr: ""
Jan  2 17:06:12.427: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan  2 17:06:12.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-x7lbn'
Jan  2 17:06:12.583: INFO: stderr: ""
Jan  2 17:06:12.583: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan  2 17:06:12.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-x7lbn'
Jan  2 17:06:12.707: INFO: stderr: ""
Jan  2 17:06:12.707: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan  2 17:06:12.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-x7lbn'
Jan  2 17:06:12.843: INFO: stderr: ""
Jan  2 17:06:12.843: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan  2 17:06:12.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-x7lbn'
Jan  2 17:06:13.202: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 17:06:13.203: INFO: stdout: "pod \"pause\" force deleted\n"
Jan  2 17:06:13.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-x7lbn'
Jan  2 17:06:13.503: INFO: stderr: "No resources found.\n"
Jan  2 17:06:13.504: INFO: stdout: ""
Jan  2 17:06:13.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-x7lbn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 17:06:13.662: INFO: stderr: ""
Jan  2 17:06:13.662: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:06:13.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x7lbn" for this suite.
Jan  2 17:06:21.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:06:21.795: INFO: namespace: e2e-tests-kubectl-x7lbn, resource: bindings, ignored listing per whitelist
Jan  2 17:06:21.935: INFO: namespace e2e-tests-kubectl-x7lbn deletion completed in 8.255537274s

• [SLOW TEST:19.120 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:06:21.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  2 17:06:22.240: INFO: Waiting up to 5m0s for pod "pod-338ffabf-2d82-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-t6lxp" to be "success or failure"
Jan  2 17:06:22.334: INFO: Pod "pod-338ffabf-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 94.296695ms
Jan  2 17:06:24.429: INFO: Pod "pod-338ffabf-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188622129s
Jan  2 17:06:26.465: INFO: Pod "pod-338ffabf-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225500038s
Jan  2 17:06:28.522: INFO: Pod "pod-338ffabf-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.282264996s
Jan  2 17:06:30.572: INFO: Pod "pod-338ffabf-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331581725s
Jan  2 17:06:32.601: INFO: Pod "pod-338ffabf-2d82-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.361149059s
STEP: Saw pod success
Jan  2 17:06:32.601: INFO: Pod "pod-338ffabf-2d82-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:06:32.609: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-338ffabf-2d82-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 17:06:32.699: INFO: Waiting for pod pod-338ffabf-2d82-11ea-b611-0242ac110005 to disappear
Jan  2 17:06:32.706: INFO: Pod pod-338ffabf-2d82-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:06:32.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-t6lxp" for this suite.
Jan  2 17:06:40.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:06:40.776: INFO: namespace: e2e-tests-emptydir-t6lxp, resource: bindings, ignored listing per whitelist
Jan  2 17:06:40.904: INFO: namespace e2e-tests-emptydir-t6lxp deletion completed in 8.190675819s

• [SLOW TEST:18.969 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:06:40.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-3ee27690-2d82-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 17:06:41.263: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ee3c825-2d82-11ea-b611-0242ac110005" in namespace "e2e-tests-configmap-fjfzt" to be "success or failure"
Jan  2 17:06:41.274: INFO: Pod "pod-configmaps-3ee3c825-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.650447ms
Jan  2 17:06:43.292: INFO: Pod "pod-configmaps-3ee3c825-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028918769s
Jan  2 17:06:45.326: INFO: Pod "pod-configmaps-3ee3c825-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062225572s
Jan  2 17:06:47.562: INFO: Pod "pod-configmaps-3ee3c825-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.298968871s
Jan  2 17:06:49.575: INFO: Pod "pod-configmaps-3ee3c825-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.311264458s
Jan  2 17:06:51.585: INFO: Pod "pod-configmaps-3ee3c825-2d82-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.321950766s
STEP: Saw pod success
Jan  2 17:06:51.585: INFO: Pod "pod-configmaps-3ee3c825-2d82-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:06:51.589: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3ee3c825-2d82-11ea-b611-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 17:06:52.194: INFO: Waiting for pod pod-configmaps-3ee3c825-2d82-11ea-b611-0242ac110005 to disappear
Jan  2 17:06:52.415: INFO: Pod pod-configmaps-3ee3c825-2d82-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:06:52.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fjfzt" for this suite.
Jan  2 17:06:58.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:06:58.679: INFO: namespace: e2e-tests-configmap-fjfzt, resource: bindings, ignored listing per whitelist
Jan  2 17:06:58.719: INFO: namespace e2e-tests-configmap-fjfzt deletion completed in 6.285332129s

• [SLOW TEST:17.814 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:06:58.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-gs62w.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gs62w.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-gs62w.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-gs62w.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gs62w.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-gs62w.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  2 17:07:15.428: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.500: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.537: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.564: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.583: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.594: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.624: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gs62w.svc.cluster.local from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.655: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.678: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.687: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.910: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gs62w.svc.cluster.local from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.925: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.938: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.945: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005: the server could not find the requested resource (get pods dns-test-49731d60-2d82-11ea-b611-0242ac110005)
Jan  2 17:07:15.945: INFO: Lookups using e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gs62w.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gs62w.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  2 17:07:21.131: INFO: DNS probes using e2e-tests-dns-gs62w/dns-test-49731d60-2d82-11ea-b611-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:07:21.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-gs62w" for this suite.
Jan  2 17:07:29.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:07:29.693: INFO: namespace: e2e-tests-dns-gs62w, resource: bindings, ignored listing per whitelist
Jan  2 17:07:29.703: INFO: namespace e2e-tests-dns-gs62w deletion completed in 8.371706363s

• [SLOW TEST:30.984 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:07:29.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 17:07:29.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-4lm9s" to be "success or failure"
Jan  2 17:07:29.931: INFO: Pod "downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.782344ms
Jan  2 17:07:31.945: INFO: Pod "downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035511147s
Jan  2 17:07:33.975: INFO: Pod "downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065613649s
Jan  2 17:07:35.997: INFO: Pod "downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087634512s
Jan  2 17:07:38.012: INFO: Pod "downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10242895s
Jan  2 17:07:40.171: INFO: Pod "downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.261310989s
Jan  2 17:07:42.186: INFO: Pod "downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.276791612s
STEP: Saw pod success
Jan  2 17:07:42.187: INFO: Pod "downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:07:42.193: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 17:07:42.321: INFO: Waiting for pod downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005 to disappear
Jan  2 17:07:42.353: INFO: Pod downwardapi-volume-5be3d1bd-2d82-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:07:42.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4lm9s" for this suite.
Jan  2 17:07:48.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:07:48.684: INFO: namespace: e2e-tests-downward-api-4lm9s, resource: bindings, ignored listing per whitelist
Jan  2 17:07:48.743: INFO: namespace e2e-tests-downward-api-4lm9s deletion completed in 6.35897171s

• [SLOW TEST:19.040 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
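Annotation: the repeated `Waiting up to 5m0s for pod "..." ... Phase="Pending" ... Elapsed: ...` lines in the test above come from the framework polling the pod's phase every couple of seconds until it reaches a terminal state or the timeout expires. A minimal sketch of that wait-loop pattern (a hypothetical helper for illustration, not the actual framework code):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             now=time.monotonic, sleep=time.sleep):
    """Poll `condition()` every `interval` seconds until it returns truthy
    or `timeout` seconds elapse. Returns True on success, False on timeout.
    `now` and `sleep` are injectable so the loop can be tested without
    real waiting."""
    deadline = now() + timeout
    while now() < deadline:
        if condition():
            return True
        sleep(interval)
    return False
```

In the log, the condition being polled is "pod phase is Succeeded or Failed" (the framework's "success or failure" condition); both phases are terminal, and only Succeeded produces the `Saw pod success` step.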
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:07:48.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rrllc
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 17:07:48.936: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 17:08:27.242: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-rrllc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 17:08:27.242: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 17:08:28.809: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:08:28.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rrllc" for this suite.
Jan  2 17:08:54.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:08:54.963: INFO: namespace: e2e-tests-pod-network-test-rrllc, resource: bindings, ignored listing per whitelist
Jan  2 17:08:55.031: INFO: namespace e2e-tests-pod-network-test-rrllc deletion completed in 26.182949587s

• [SLOW TEST:66.288 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
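Annotation: the `ExecWithOptions` line in the networking test above runs `echo 'hostName' | nc -w 1 -u 10.32.0.4 8081` inside a host-exec pod — it sends the string `hostName` to the netserver pod over UDP and expects a non-empty reply naming the endpoint. A self-contained sketch of that probe in Python (the local demo server is a stand-in for the netserver pod; `10.32.0.4:8081` in the log is the real endpoint from this run):

```python
import socket
import threading

def udp_hostname_probe(host, port, payload=b"hostName", timeout=1.0):
    """Send `payload` over UDP and return the stripped reply, mirroring
    the `echo 'hostName' | nc -w 1 -u <pod-ip> 8081` check in the log."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(payload, (host, port))
        data, _ = s.recvfrom(4096)
    return data.strip()

def demo_server(sock, name=b"netserver-0"):
    """Stand-in for the netserver pod: answer one 'hostName' datagram
    with a pod name, as the real netserver container does on port 8081."""
    data, addr = sock.recvfrom(4096)
    if data.strip() == b"hostName":
        sock.sendto(name, addr)
```

The test passes once the set of names collected this way matches the expected endpoints — hence `Found all expected endpoints: [netserver-0]` above.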
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:08:55.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-8ec2a2b0-2d82-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 17:08:55.274: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8ec44fdc-2d82-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-zj5lg" to be "success or failure"
Jan  2 17:08:55.355: INFO: Pod "pod-projected-configmaps-8ec44fdc-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 79.914535ms
Jan  2 17:08:57.440: INFO: Pod "pod-projected-configmaps-8ec44fdc-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165561349s
Jan  2 17:08:59.470: INFO: Pod "pod-projected-configmaps-8ec44fdc-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195728934s
Jan  2 17:09:01.575: INFO: Pod "pod-projected-configmaps-8ec44fdc-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299977321s
Jan  2 17:09:03.611: INFO: Pod "pod-projected-configmaps-8ec44fdc-2d82-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336762776s
Jan  2 17:09:05.644: INFO: Pod "pod-projected-configmaps-8ec44fdc-2d82-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.369179768s
STEP: Saw pod success
Jan  2 17:09:05.644: INFO: Pod "pod-projected-configmaps-8ec44fdc-2d82-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:09:05.657: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-8ec44fdc-2d82-11ea-b611-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 17:09:05.885: INFO: Waiting for pod pod-projected-configmaps-8ec44fdc-2d82-11ea-b611-0242ac110005 to disappear
Jan  2 17:09:05.935: INFO: Pod pod-projected-configmaps-8ec44fdc-2d82-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:09:05.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zj5lg" for this suite.
Jan  2 17:09:12.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:09:12.099: INFO: namespace: e2e-tests-projected-zj5lg, resource: bindings, ignored listing per whitelist
Jan  2 17:09:12.180: INFO: namespace e2e-tests-projected-zj5lg deletion completed in 6.217658259s

• [SLOW TEST:17.148 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:09:12.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan  2 17:09:22.874: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:10:10.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-4qbsq" for this suite.
Jan  2 17:10:16.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:10:16.404: INFO: namespace: e2e-tests-namespaces-4qbsq, resource: bindings, ignored listing per whitelist
Jan  2 17:10:16.579: INFO: namespace e2e-tests-namespaces-4qbsq deletion completed in 6.254982646s
STEP: Destroying namespace "e2e-tests-nsdeletetest-vbmfh" for this suite.
Jan  2 17:10:16.584: INFO: Namespace e2e-tests-nsdeletetest-vbmfh was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-vb5n4" for this suite.
Jan  2 17:10:22.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:10:22.769: INFO: namespace: e2e-tests-nsdeletetest-vb5n4, resource: bindings, ignored listing per whitelist
Jan  2 17:10:22.830: INFO: namespace e2e-tests-nsdeletetest-vb5n4 deletion completed in 6.246446318s

• [SLOW TEST:70.650 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:10:22.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  2 17:13:26.378: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:26.428: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:28.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:28.449: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:30.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:30.457: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:32.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:32.454: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:34.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:34.497: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:36.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:36.451: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:38.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:38.445: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:40.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:40.461: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:42.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:42.452: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:44.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:44.459: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:46.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:46.452: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:48.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:48.449: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:50.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:50.448: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:52.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:52.470: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:54.430: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:54.466: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:56.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:56.792: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:13:58.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:13:58.451: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:14:00.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:14:00.452: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  2 17:14:02.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  2 17:14:02.447: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:14:02.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-95bwt" for this suite.
Jan  2 17:14:26.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:14:26.707: INFO: namespace: e2e-tests-container-lifecycle-hook-95bwt, resource: bindings, ignored listing per whitelist
Jan  2 17:14:26.725: INFO: namespace e2e-tests-container-lifecycle-hook-95bwt deletion completed in 24.266126658s

• [SLOW TEST:243.895 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:14:26.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan  2 17:14:27.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:14:30.039: INFO: stderr: ""
Jan  2 17:14:30.039: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 17:14:30.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:14:30.233: INFO: stderr: ""
Jan  2 17:14:30.234: INFO: stdout: "update-demo-nautilus-2sxbm "
STEP: Replicas for name=update-demo: expected=2 actual=1
Jan  2 17:14:35.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:14:35.398: INFO: stderr: ""
Jan  2 17:14:35.398: INFO: stdout: "update-demo-nautilus-2sxbm update-demo-nautilus-n9sj7 "
Jan  2 17:14:35.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2sxbm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:14:35.557: INFO: stderr: ""
Jan  2 17:14:35.557: INFO: stdout: ""
Jan  2 17:14:35.557: INFO: update-demo-nautilus-2sxbm is created but not running
Jan  2 17:14:40.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:14:40.726: INFO: stderr: ""
Jan  2 17:14:40.726: INFO: stdout: "update-demo-nautilus-2sxbm update-demo-nautilus-n9sj7 "
Jan  2 17:14:40.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2sxbm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:14:40.845: INFO: stderr: ""
Jan  2 17:14:40.845: INFO: stdout: ""
Jan  2 17:14:40.845: INFO: update-demo-nautilus-2sxbm is created but not running
Jan  2 17:14:45.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:14:46.048: INFO: stderr: ""
Jan  2 17:14:46.049: INFO: stdout: "update-demo-nautilus-2sxbm update-demo-nautilus-n9sj7 "
Jan  2 17:14:46.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2sxbm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:14:46.203: INFO: stderr: ""
Jan  2 17:14:46.203: INFO: stdout: "true"
Jan  2 17:14:46.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2sxbm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:14:46.358: INFO: stderr: ""
Jan  2 17:14:46.358: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 17:14:46.358: INFO: validating pod update-demo-nautilus-2sxbm
Jan  2 17:14:46.381: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 17:14:46.381: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 17:14:46.381: INFO: update-demo-nautilus-2sxbm is verified up and running
Jan  2 17:14:46.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9sj7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:14:46.537: INFO: stderr: ""
Jan  2 17:14:46.537: INFO: stdout: "true"
Jan  2 17:14:46.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9sj7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:14:46.717: INFO: stderr: ""
Jan  2 17:14:46.717: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 17:14:46.717: INFO: validating pod update-demo-nautilus-n9sj7
Jan  2 17:14:46.735: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 17:14:46.735: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 17:14:46.735: INFO: update-demo-nautilus-n9sj7 is verified up and running
STEP: rolling-update to new replication controller
Jan  2 17:14:46.738: INFO: scanned /root for discovery docs: 
Jan  2 17:14:46.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:15:21.927: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  2 17:15:21.927: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 17:15:21.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:15:22.259: INFO: stderr: ""
Jan  2 17:15:22.259: INFO: stdout: "update-demo-kitten-2cwjf update-demo-kitten-nrgzp "
Jan  2 17:15:22.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2cwjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:15:22.427: INFO: stderr: ""
Jan  2 17:15:22.427: INFO: stdout: "true"
Jan  2 17:15:22.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2cwjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:15:22.586: INFO: stderr: ""
Jan  2 17:15:22.587: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  2 17:15:22.587: INFO: validating pod update-demo-kitten-2cwjf
Jan  2 17:15:22.625: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  2 17:15:22.625: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  2 17:15:22.625: INFO: update-demo-kitten-2cwjf is verified up and running
Jan  2 17:15:22.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nrgzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:15:22.859: INFO: stderr: ""
Jan  2 17:15:22.859: INFO: stdout: "true"
Jan  2 17:15:22.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nrgzp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jmr49'
Jan  2 17:15:23.021: INFO: stderr: ""
Jan  2 17:15:23.021: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  2 17:15:23.021: INFO: validating pod update-demo-kitten-nrgzp
Jan  2 17:15:23.040: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  2 17:15:23.040: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  2 17:15:23.040: INFO: update-demo-kitten-nrgzp is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:15:23.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jmr49" for this suite.
Jan  2 17:15:49.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:15:49.168: INFO: namespace: e2e-tests-kubectl-jmr49, resource: bindings, ignored listing per whitelist
Jan  2 17:15:49.407: INFO: namespace e2e-tests-kubectl-jmr49 deletion completed in 26.359048302s

• [SLOW TEST:82.681 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
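Annotation: the rolling-update test above repeatedly runs `kubectl get pods ... -o template --template='{{if (exists . "status" "containerStatuses")}}...{{end}}'`, which prints `true` only when the `update-demo` container reports a `running` state (empty stdout means "created but not running", as the log shows). The same check, sketched against a pod object decoded from `kubectl get pod -o json` (hypothetical helper, not framework code):

```python
def container_running(pod, name="update-demo"):
    """Return True if the container called `name` in this pod dict has a
    `running` entry in its state, mirroring the go-template check the
    log runs via `kubectl get pods -o template`."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False
```

Once this check returns true for a pod, the test goes on to verify the container image and the served data (`nautilus.jpg` before the rolling update, `kitten.jpg` after).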
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:15:49.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 17:15:49.685: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85c902d0-2d83-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-k8r58" to be "success or failure"
Jan  2 17:15:49.699: INFO: Pod "downwardapi-volume-85c902d0-2d83-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.703824ms
Jan  2 17:15:51.961: INFO: Pod "downwardapi-volume-85c902d0-2d83-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276474601s
Jan  2 17:15:53.983: INFO: Pod "downwardapi-volume-85c902d0-2d83-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29830108s
Jan  2 17:15:56.081: INFO: Pod "downwardapi-volume-85c902d0-2d83-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395868072s
Jan  2 17:15:58.100: INFO: Pod "downwardapi-volume-85c902d0-2d83-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.415361198s
Jan  2 17:16:00.114: INFO: Pod "downwardapi-volume-85c902d0-2d83-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.429093425s
STEP: Saw pod success
Jan  2 17:16:00.114: INFO: Pod "downwardapi-volume-85c902d0-2d83-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:16:00.118: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-85c902d0-2d83-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 17:16:00.318: INFO: Waiting for pod downwardapi-volume-85c902d0-2d83-11ea-b611-0242ac110005 to disappear
Jan  2 17:16:00.337: INFO: Pod downwardapi-volume-85c902d0-2d83-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:16:00.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-k8r58" for this suite.
Jan  2 17:16:06.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:16:06.744: INFO: namespace: e2e-tests-downward-api-k8r58, resource: bindings, ignored listing per whitelist
Jan  2 17:16:06.770: INFO: namespace e2e-tests-downward-api-k8r58 deletion completed in 6.419527238s

• [SLOW TEST:17.363 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
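The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above come from a simple poll loop: read the pod phase, return on a terminal phase, give up at the timeout. A minimal sketch of that loop, with `get_phase`, `clock`, and `sleep` as injectable stand-ins for the real cluster calls (all names here are hypothetical, not the framework's API):

```python
import time

def wait_for_phase(get_phase, goal=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    # Poll get_phase() until it reports a terminal phase or the timeout
    # (5m0s in the log above) expires. get_phase stands in for a real
    # "read pod status from the API server" call.
    start = clock()
    while True:
        phase = get_phase()
        if phase in goal:
            return phase
        if clock() - start >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
```

The injected clock and sleep make the loop unit-testable without waiting out real intervals, which is why they are parameters rather than hard-coded calls.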
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:16:06.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-901dd95c-2d83-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 17:16:07.066: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-901f9428-2d83-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-26zf9" to be "success or failure"
Jan  2 17:16:07.074: INFO: Pod "pod-projected-configmaps-901f9428-2d83-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.827598ms
Jan  2 17:16:09.087: INFO: Pod "pod-projected-configmaps-901f9428-2d83-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020409935s
Jan  2 17:16:11.108: INFO: Pod "pod-projected-configmaps-901f9428-2d83-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041208904s
Jan  2 17:16:13.666: INFO: Pod "pod-projected-configmaps-901f9428-2d83-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.599437025s
Jan  2 17:16:15.694: INFO: Pod "pod-projected-configmaps-901f9428-2d83-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.627600876s
Jan  2 17:16:17.713: INFO: Pod "pod-projected-configmaps-901f9428-2d83-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.646762382s
STEP: Saw pod success
Jan  2 17:16:17.713: INFO: Pod "pod-projected-configmaps-901f9428-2d83-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:16:17.721: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-901f9428-2d83-11ea-b611-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 17:16:18.977: INFO: Waiting for pod pod-projected-configmaps-901f9428-2d83-11ea-b611-0242ac110005 to disappear
Jan  2 17:16:18.987: INFO: Pod pod-projected-configmaps-901f9428-2d83-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:16:18.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-26zf9" for this suite.
Jan  2 17:16:25.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:16:25.339: INFO: namespace: e2e-tests-projected-26zf9, resource: bindings, ignored listing per whitelist
Jan  2 17:16:25.431: INFO: namespace e2e-tests-projected-26zf9 deletion completed in 6.430196547s

• [SLOW TEST:18.660 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
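The projected-configMap-as-non-root spec above boils down to a pod that mounts a configMap through a `projected` volume while running under a non-root UID via `securityContext.runAsUser`. A sketch of the equivalent manifest as a Python dict (the real test builds it in Go in `test/e2e/common/projected_configmap.go`; the pod/container names and image here are illustrative assumptions):

```python
def non_root_projected_pod(configmap_name, uid=1000):
    # Hypothetical names throughout; only the securityContext and
    # projected-volume shape mirror what the conformance test asserts.
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-configmaps-example"},
        "spec": {
            "securityContext": {"runAsUser": uid},  # non-root UID
            "containers": [{
                "name": "projected-configmap-volume-test",
                "image": "busybox",
                "command": ["cat", "/etc/projected/data-1"],
                "volumeMounts": [{"name": "cfg",
                                  "mountPath": "/etc/projected"}],
            }],
            "volumes": [{
                "name": "cfg",
                "projected": {"sources": [
                    {"configMap": {"name": configmap_name}}]},
            }],
            "restartPolicy": "Never",
        },
    }
```

The test passes when the container, running as the non-root UID, can still read the projected file and the pod reaches `Succeeded`.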
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:16:25.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-s8nd7
Jan  2 17:16:35.699: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-s8nd7
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 17:16:35.708: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:20:36.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-s8nd7" for this suite.
Jan  2 17:20:42.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:20:42.952: INFO: namespace: e2e-tests-container-probe-s8nd7, resource: bindings, ignored listing per whitelist
Jan  2 17:20:42.979: INFO: namespace e2e-tests-container-probe-s8nd7 deletion completed in 6.238533437s

• [SLOW TEST:257.548 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
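The probe test above starts a pod whose exec probe (`cat /tmp/health`) keeps succeeding, then watches for four minutes to confirm `restartCount` stays 0. The kubelet restarts a container only after `failureThreshold` consecutive probe failures; a simplified sketch of that bookkeeping (an assumption-laden model, not the kubelet's actual code):

```python
def restarts_after(probe_results, failure_threshold=3):
    # Count a restart each time the probe fails failure_threshold times
    # in a row, resetting the streak on success -- a minimal sketch of
    # kubelet liveness handling, ignoring periodSeconds and grace periods.
    restarts = consecutive = 0
    for ok in probe_results:
        if ok:
            consecutive = 0
        else:
            consecutive += 1
            if consecutive == failure_threshold:
                restarts += 1
                consecutive = 0
    return restarts
```

An always-succeeding probe, as in this spec, never accumulates a failure streak, so the observed restart count stays at the initial 0 that the log records.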
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:20:42.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 17:20:43.290: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34c90b86-2d84-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-hlztj" to be "success or failure"
Jan  2 17:20:43.304: INFO: Pod "downwardapi-volume-34c90b86-2d84-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.011655ms
Jan  2 17:20:45.854: INFO: Pod "downwardapi-volume-34c90b86-2d84-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.563595817s
Jan  2 17:20:47.905: INFO: Pod "downwardapi-volume-34c90b86-2d84-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.613985604s
Jan  2 17:20:50.202: INFO: Pod "downwardapi-volume-34c90b86-2d84-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.911191466s
Jan  2 17:20:52.229: INFO: Pod "downwardapi-volume-34c90b86-2d84-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.937998222s
Jan  2 17:20:54.250: INFO: Pod "downwardapi-volume-34c90b86-2d84-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.959337622s
STEP: Saw pod success
Jan  2 17:20:54.250: INFO: Pod "downwardapi-volume-34c90b86-2d84-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:20:54.258: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-34c90b86-2d84-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 17:20:54.415: INFO: Waiting for pod downwardapi-volume-34c90b86-2d84-11ea-b611-0242ac110005 to disappear
Jan  2 17:20:54.430: INFO: Pod downwardapi-volume-34c90b86-2d84-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:20:54.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hlztj" for this suite.
Jan  2 17:21:00.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:21:00.536: INFO: namespace: e2e-tests-downward-api-hlztj, resource: bindings, ignored listing per whitelist
Jan  2 17:21:00.711: INFO: namespace e2e-tests-downward-api-hlztj deletion completed in 6.270715552s

• [SLOW TEST:17.731 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
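The cpu-limit spec above uses a downward API volume with a `resourceFieldRef` on `limits.cpu`: the value written to the file is the limit divided by the requested `divisor`, rounded up to an integer. A sketch of that arithmetic in millicores (a simplified model of the documented behavior, not the framework's conversion code):

```python
import math

def cpu_limit_file_value(limit_millicores, divisor_millicores=1):
    # The downward API writes limits.cpu divided by the divisor, rounded
    # up -- e.g. a 250m limit with divisor "1m" is written as "250", and
    # with divisor "1" (whole CPUs, 1000m) as "1".
    return str(math.ceil(limit_millicores / divisor_millicores))
```

The test then reads the mounted file from the container and compares it against the value computed from the pod's own resource limits.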
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:21:00.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2gl5s
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  2 17:21:00.950: INFO: Found 0 stateful pods, waiting for 3
Jan  2 17:21:10.973: INFO: Found 1 stateful pods, waiting for 3
Jan  2 17:21:20.977: INFO: Found 2 stateful pods, waiting for 3
Jan  2 17:21:30.978: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:21:30.979: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:21:30.979: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 17:21:40.971: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:21:40.971: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:21:40.971: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  2 17:21:41.019: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  2 17:21:51.094: INFO: Updating stateful set ss2
Jan  2 17:21:51.176: INFO: Waiting for Pod e2e-tests-statefulset-2gl5s/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:22:01.203: INFO: Waiting for Pod e2e-tests-statefulset-2gl5s/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  2 17:22:11.745: INFO: Found 2 stateful pods, waiting for 3
Jan  2 17:22:22.255: INFO: Found 2 stateful pods, waiting for 3
Jan  2 17:22:31.764: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:22:31.764: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:22:31.765: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 17:22:41.772: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:22:41.772: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:22:41.772: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  2 17:22:41.844: INFO: Updating stateful set ss2
Jan  2 17:22:41.914: INFO: Waiting for Pod e2e-tests-statefulset-2gl5s/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:22:51.980: INFO: Updating stateful set ss2
Jan  2 17:22:52.086: INFO: Waiting for StatefulSet e2e-tests-statefulset-2gl5s/ss2 to complete update
Jan  2 17:22:52.086: INFO: Waiting for Pod e2e-tests-statefulset-2gl5s/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:23:02.103: INFO: Waiting for StatefulSet e2e-tests-statefulset-2gl5s/ss2 to complete update
Jan  2 17:23:02.103: INFO: Waiting for Pod e2e-tests-statefulset-2gl5s/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:23:12.110: INFO: Waiting for StatefulSet e2e-tests-statefulset-2gl5s/ss2 to complete update
Jan  2 17:23:12.110: INFO: Waiting for Pod e2e-tests-statefulset-2gl5s/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:23:22.111: INFO: Waiting for StatefulSet e2e-tests-statefulset-2gl5s/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 17:23:32.108: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2gl5s
Jan  2 17:23:32.114: INFO: Scaling statefulset ss2 to 0
Jan  2 17:24:12.197: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 17:24:12.207: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:24:12.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2gl5s" for this suite.
Jan  2 17:24:20.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:24:21.172: INFO: namespace: e2e-tests-statefulset-2gl5s, resource: bindings, ignored listing per whitelist
Jan  2 17:24:21.253: INFO: namespace e2e-tests-statefulset-2gl5s deletion completed in 8.933889361s

• [SLOW TEST:200.542 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
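The canary and phased updates above are driven entirely by the `RollingUpdate` strategy's `partition`: only pods whose ordinal is greater than or equal to the partition are moved to the new revision, so a partition above the highest ordinal updates nothing, a partition of 2 updates only `ss2-2` (the canary), and lowering it phases the rollout down to `ss2-0`. That selection rule can be sketched as:

```python
def ordinals_to_update(replicas, partition):
    # Under RollingUpdate with a partition, only ordinals >= partition
    # adopt the new template revision; the rest stay on the old one.
    # A partition greater than replicas - 1 therefore updates no pods.
    return [i for i in range(replicas) if i >= partition]
```

This matches the log's sequence: partition 3 on a 3-replica set applies no update, partition 2 performs the canary on `ss2-2`, and stepping the partition toward 0 completes the phased rollout.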
SSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:24:21.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-b6def910-2d84-11ea-b611-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b6def910-2d84-11ea-b611-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:24:33.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-spsqc" for this suite.
Jan  2 17:24:58.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:24:58.182: INFO: namespace: e2e-tests-configmap-spsqc, resource: bindings, ignored listing per whitelist
Jan  2 17:24:58.200: INFO: namespace e2e-tests-configmap-spsqc deletion completed in 24.334555407s

• [SLOW TEST:36.947 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
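The "waiting to observe update in volume" step above exists because configMap volume updates are eventually consistent: the kubelet syncs the projected file on its own cadence, so the test repeatedly reads the mounted file instead of asserting immediately after the update. A sketch of that retry, with `read` as a stand-in for "cat the file inside the pod" (a hypothetical helper, not the framework's API):

```python
def poll_until(read, expected, attempts=30):
    # Re-read the mounted file until the updated configMap value shows
    # up, returning how many reads it took; fail after `attempts` tries.
    for i in range(attempts):
        if read() == expected:
            return i
    raise AssertionError(f"never saw {expected!r} in {attempts} reads")
```

In the real test each read goes through the pod's container, which is why the spec budgets tens of seconds for what is logically a one-line comparison.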
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:24:58.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-ccd66b4e-2d84-11ea-b611-0242ac110005
STEP: Creating secret with name s-test-opt-upd-ccd66c28-2d84-11ea-b611-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-ccd66b4e-2d84-11ea-b611-0242ac110005
STEP: Updating secret s-test-opt-upd-ccd66c28-2d84-11ea-b611-0242ac110005
STEP: Creating secret with name s-test-opt-create-ccd66c6e-2d84-11ea-b611-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:25:17.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2qvm7" for this suite.
Jan  2 17:25:41.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:25:41.205: INFO: namespace: e2e-tests-secrets-2qvm7, resource: bindings, ignored listing per whitelist
Jan  2 17:25:41.262: INFO: namespace e2e-tests-secrets-2qvm7 deletion completed in 24.190507069s

• [SLOW TEST:43.062 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
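The secrets spec above exercises the `optional` flag on secret volume sources: with `optional: true`, a missing Secret yields an empty directory instead of blocking the pod, which is why the test can delete `s-test-opt-del` and create `s-test-opt-create` while the pod keeps running. A sketch of that resolution rule (a simplified model; names are from the log, the function itself is hypothetical):

```python
def resolve_secret_volume(cluster_secrets, name, optional):
    # optional=True: a missing Secret projects as an empty volume.
    # optional=False (the default): the mount cannot be set up until
    # the Secret exists, so pod startup blocks.
    if name in cluster_secrets:
        return dict(cluster_secrets[name])
    if optional:
        return {}
    raise KeyError(f"secret {name!r} not found and not optional")
```

The "waiting to observe update in volume" step then confirms the kubelet eventually reflects all three changes: the deleted secret's files disappear, the updated one's contents change, and the newly created one's files appear.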
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:25:41.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  2 17:25:41.515: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-a,UID:e68c0ea4-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936515,Generation:0,CreationTimestamp:2020-01-02 17:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 17:25:41.516: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-a,UID:e68c0ea4-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936515,Generation:0,CreationTimestamp:2020-01-02 17:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  2 17:25:51.581: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-a,UID:e68c0ea4-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936528,Generation:0,CreationTimestamp:2020-01-02 17:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  2 17:25:51.581: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-a,UID:e68c0ea4-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936528,Generation:0,CreationTimestamp:2020-01-02 17:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  2 17:26:01.610: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-a,UID:e68c0ea4-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936541,Generation:0,CreationTimestamp:2020-01-02 17:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 17:26:01.611: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-a,UID:e68c0ea4-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936541,Generation:0,CreationTimestamp:2020-01-02 17:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  2 17:26:11.629: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-a,UID:e68c0ea4-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936553,Generation:0,CreationTimestamp:2020-01-02 17:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 17:26:11.630: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-a,UID:e68c0ea4-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936553,Generation:0,CreationTimestamp:2020-01-02 17:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  2 17:26:21.659: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-b,UID:fe76e810-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936565,Generation:0,CreationTimestamp:2020-01-02 17:26:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 17:26:21.660: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-b,UID:fe76e810-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936565,Generation:0,CreationTimestamp:2020-01-02 17:26:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  2 17:26:31.698: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-b,UID:fe76e810-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936578,Generation:0,CreationTimestamp:2020-01-02 17:26:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 17:26:31.698: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-q97jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-q97jt/configmaps/e2e-watch-test-configmap-b,UID:fe76e810-2d84-11ea-a994-fa163e34d433,ResourceVersion:16936578,Generation:0,CreationTimestamp:2020-01-02 17:26:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:26:41.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-q97jt" for this suite.
Jan  2 17:26:47.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:26:47.870: INFO: namespace: e2e-tests-watch-q97jt, resource: bindings, ignored listing per whitelist
Jan  2 17:26:47.983: INFO: namespace e2e-tests-watch-q97jt deletion completed in 6.265569161s

• [SLOW TEST:66.720 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:26:47.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-fx2z
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 17:26:48.372: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fx2z" in namespace "e2e-tests-subpath-n8nkr" to be "success or failure"
Jan  2 17:26:48.419: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Pending", Reason="", readiness=false. Elapsed: 46.105218ms
Jan  2 17:26:50.627: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254322352s
Jan  2 17:26:52.702: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32994954s
Jan  2 17:26:55.364: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.991230145s
Jan  2 17:26:57.387: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Pending", Reason="", readiness=false. Elapsed: 9.01481187s
Jan  2 17:26:59.406: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Pending", Reason="", readiness=false. Elapsed: 11.033716646s
Jan  2 17:27:01.912: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Pending", Reason="", readiness=false. Elapsed: 13.539691202s
Jan  2 17:27:03.977: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Pending", Reason="", readiness=false. Elapsed: 15.604593169s
Jan  2 17:27:05.986: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Running", Reason="", readiness=false. Elapsed: 17.613802417s
Jan  2 17:27:08.012: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Running", Reason="", readiness=false. Elapsed: 19.639771356s
Jan  2 17:27:10.029: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Running", Reason="", readiness=false. Elapsed: 21.656778982s
Jan  2 17:27:12.054: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Running", Reason="", readiness=false. Elapsed: 23.682080751s
Jan  2 17:27:14.073: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Running", Reason="", readiness=false. Elapsed: 25.700857145s
Jan  2 17:27:16.090: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Running", Reason="", readiness=false. Elapsed: 27.717975556s
Jan  2 17:27:18.108: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Running", Reason="", readiness=false. Elapsed: 29.735738191s
Jan  2 17:27:20.132: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Running", Reason="", readiness=false. Elapsed: 31.7599901s
Jan  2 17:27:22.160: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Running", Reason="", readiness=false. Elapsed: 33.787268989s
Jan  2 17:27:24.186: INFO: Pod "pod-subpath-test-configmap-fx2z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.813268659s
STEP: Saw pod success
Jan  2 17:27:24.186: INFO: Pod "pod-subpath-test-configmap-fx2z" satisfied condition "success or failure"
Jan  2 17:27:24.197: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-fx2z container test-container-subpath-configmap-fx2z: 
STEP: delete the pod
Jan  2 17:27:24.316: INFO: Waiting for pod pod-subpath-test-configmap-fx2z to disappear
Jan  2 17:27:24.628: INFO: Pod pod-subpath-test-configmap-fx2z no longer exists
STEP: Deleting pod pod-subpath-test-configmap-fx2z
Jan  2 17:27:24.628: INFO: Deleting pod "pod-subpath-test-configmap-fx2z" in namespace "e2e-tests-subpath-n8nkr"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:27:24.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-n8nkr" for this suite.
Jan  2 17:27:30.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:27:30.917: INFO: namespace: e2e-tests-subpath-n8nkr, resource: bindings, ignored listing per whitelist
Jan  2 17:27:31.003: INFO: namespace e2e-tests-subpath-n8nkr deletion completed in 6.321910861s

• [SLOW TEST:43.019 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:27:31.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-5ct2
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 17:27:31.543: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5ct2" in namespace "e2e-tests-subpath-mvq2h" to be "success or failure"
Jan  2 17:27:31.586: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Pending", Reason="", readiness=false. Elapsed: 42.923039ms
Jan  2 17:27:33.786: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242740762s
Jan  2 17:27:35.809: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266627814s
Jan  2 17:27:37.846: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.303582863s
Jan  2 17:27:39.875: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.332647976s
Jan  2 17:27:42.061: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.518094166s
Jan  2 17:27:44.076: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.532768778s
Jan  2 17:27:46.120: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.577407612s
Jan  2 17:27:48.150: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Running", Reason="", readiness=false. Elapsed: 16.607430848s
Jan  2 17:27:50.177: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Running", Reason="", readiness=false. Elapsed: 18.634233365s
Jan  2 17:27:52.195: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Running", Reason="", readiness=false. Elapsed: 20.652208383s
Jan  2 17:27:54.207: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Running", Reason="", readiness=false. Elapsed: 22.664599284s
Jan  2 17:27:56.267: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Running", Reason="", readiness=false. Elapsed: 24.724591914s
Jan  2 17:27:58.283: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Running", Reason="", readiness=false. Elapsed: 26.740174543s
Jan  2 17:28:00.300: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Running", Reason="", readiness=false. Elapsed: 28.75680144s
Jan  2 17:28:02.319: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Running", Reason="", readiness=false. Elapsed: 30.775916434s
Jan  2 17:28:04.360: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Running", Reason="", readiness=false. Elapsed: 32.81754697s
Jan  2 17:28:06.383: INFO: Pod "pod-subpath-test-downwardapi-5ct2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.840566512s
STEP: Saw pod success
Jan  2 17:28:06.384: INFO: Pod "pod-subpath-test-downwardapi-5ct2" satisfied condition "success or failure"
Jan  2 17:28:06.390: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-5ct2 container test-container-subpath-downwardapi-5ct2: 
STEP: delete the pod
Jan  2 17:28:06.477: INFO: Waiting for pod pod-subpath-test-downwardapi-5ct2 to disappear
Jan  2 17:28:06.504: INFO: Pod pod-subpath-test-downwardapi-5ct2 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5ct2
Jan  2 17:28:06.505: INFO: Deleting pod "pod-subpath-test-downwardapi-5ct2" in namespace "e2e-tests-subpath-mvq2h"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:28:06.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mvq2h" for this suite.
Jan  2 17:28:12.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:28:12.915: INFO: namespace: e2e-tests-subpath-mvq2h, resource: bindings, ignored listing per whitelist
Jan  2 17:28:12.948: INFO: namespace e2e-tests-subpath-mvq2h deletion completed in 6.307028651s

• [SLOW TEST:41.944 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:28:12.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 17:28:13.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-b7tnc" to be "success or failure"
Jan  2 17:28:13.445: INFO: Pod "downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.599303ms
Jan  2 17:28:15.698: INFO: Pod "downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3109326s
Jan  2 17:28:17.732: INFO: Pod "downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344968665s
Jan  2 17:28:19.949: INFO: Pod "downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.562056797s
Jan  2 17:28:22.190: INFO: Pod "downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.80336873s
Jan  2 17:28:24.200: INFO: Pod "downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.812831602s
Jan  2 17:28:26.219: INFO: Pod "downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.832377507s
STEP: Saw pod success
Jan  2 17:28:26.220: INFO: Pod "downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:28:26.226: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 17:28:26.560: INFO: Waiting for pod downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005 to disappear
Jan  2 17:28:26.594: INFO: Pod downwardapi-volume-410000a8-2d85-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:28:26.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b7tnc" for this suite.
Jan  2 17:28:32.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:28:33.056: INFO: namespace: e2e-tests-projected-b7tnc, resource: bindings, ignored listing per whitelist
Jan  2 17:28:33.188: INFO: namespace e2e-tests-projected-b7tnc deletion completed in 6.399555712s

• [SLOW TEST:20.239 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:28:33.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-wpqp
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 17:28:33.470: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wpqp" in namespace "e2e-tests-subpath-rwkbk" to be "success or failure"
Jan  2 17:28:33.494: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 23.453617ms
Jan  2 17:28:35.525: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054922755s
Jan  2 17:28:37.556: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085548652s
Jan  2 17:28:39.598: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127760831s
Jan  2 17:28:41.614: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143722336s
Jan  2 17:28:43.658: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.187301198s
Jan  2 17:28:45.690: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.219460503s
Jan  2 17:28:47.704: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.233106168s
Jan  2 17:28:49.819: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.348115773s
Jan  2 17:28:51.847: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Running", Reason="", readiness=false. Elapsed: 18.376455059s
Jan  2 17:28:53.941: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Running", Reason="", readiness=false. Elapsed: 20.470591258s
Jan  2 17:28:55.963: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Running", Reason="", readiness=false. Elapsed: 22.492264741s
Jan  2 17:28:57.979: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Running", Reason="", readiness=false. Elapsed: 24.50825394s
Jan  2 17:28:59.997: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Running", Reason="", readiness=false. Elapsed: 26.527071008s
Jan  2 17:29:02.014: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Running", Reason="", readiness=false. Elapsed: 28.543270117s
Jan  2 17:29:04.040: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Running", Reason="", readiness=false. Elapsed: 30.569367328s
Jan  2 17:29:06.053: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Running", Reason="", readiness=false. Elapsed: 32.582686006s
Jan  2 17:29:08.077: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Running", Reason="", readiness=false. Elapsed: 34.606098523s
Jan  2 17:29:10.114: INFO: Pod "pod-subpath-test-secret-wpqp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.643513295s
STEP: Saw pod success
Jan  2 17:29:10.114: INFO: Pod "pod-subpath-test-secret-wpqp" satisfied condition "success or failure"
Jan  2 17:29:10.125: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-wpqp container test-container-subpath-secret-wpqp: 
STEP: delete the pod
Jan  2 17:29:10.250: INFO: Waiting for pod pod-subpath-test-secret-wpqp to disappear
Jan  2 17:29:10.331: INFO: Pod pod-subpath-test-secret-wpqp no longer exists
STEP: Deleting pod pod-subpath-test-secret-wpqp
Jan  2 17:29:10.331: INFO: Deleting pod "pod-subpath-test-secret-wpqp" in namespace "e2e-tests-subpath-rwkbk"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:29:10.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-rwkbk" for this suite.
Jan  2 17:29:18.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:29:18.767: INFO: namespace: e2e-tests-subpath-rwkbk, resource: bindings, ignored listing per whitelist
Jan  2 17:29:18.926: INFO: namespace e2e-tests-subpath-rwkbk deletion completed in 8.57543402s

• [SLOW TEST:45.738 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:29:18.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan  2 17:29:19.193: INFO: Waiting up to 5m0s for pod "var-expansion-684979be-2d85-11ea-b611-0242ac110005" in namespace "e2e-tests-var-expansion-st9kh" to be "success or failure"
Jan  2 17:29:19.203: INFO: Pod "var-expansion-684979be-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.412866ms
Jan  2 17:29:21.475: INFO: Pod "var-expansion-684979be-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282524264s
Jan  2 17:29:23.481: INFO: Pod "var-expansion-684979be-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288296625s
Jan  2 17:29:25.792: INFO: Pod "var-expansion-684979be-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.599171264s
Jan  2 17:29:27.849: INFO: Pod "var-expansion-684979be-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.656555406s
Jan  2 17:29:29.875: INFO: Pod "var-expansion-684979be-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.682024311s
Jan  2 17:29:31.900: INFO: Pod "var-expansion-684979be-2d85-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.707284571s
STEP: Saw pod success
Jan  2 17:29:31.900: INFO: Pod "var-expansion-684979be-2d85-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:29:31.906: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-684979be-2d85-11ea-b611-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 17:29:31.971: INFO: Waiting for pod var-expansion-684979be-2d85-11ea-b611-0242ac110005 to disappear
Jan  2 17:29:31.982: INFO: Pod var-expansion-684979be-2d85-11ea-b611-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:29:31.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-st9kh" for this suite.
Jan  2 17:29:38.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:29:38.225: INFO: namespace: e2e-tests-var-expansion-st9kh, resource: bindings, ignored listing per whitelist
Jan  2 17:29:38.303: INFO: namespace e2e-tests-var-expansion-st9kh deletion completed in 6.208077341s

• [SLOW TEST:19.377 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:29:38.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  2 17:29:46.982: INFO: 2 pods remaining
Jan  2 17:29:46.983: INFO: 0 pods has nil DeletionTimestamp
Jan  2 17:29:46.983: INFO: 
Jan  2 17:29:47.279: INFO: 0 pods remaining
Jan  2 17:29:47.279: INFO: 0 pods has nil DeletionTimestamp
Jan  2 17:29:47.279: INFO: 
STEP: Gathering metrics
W0102 17:29:48.140047       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 17:29:48.140: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:29:48.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-s5hcf" for this suite.
Jan  2 17:30:00.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:30:00.903: INFO: namespace: e2e-tests-gc-s5hcf, resource: bindings, ignored listing per whitelist
Jan  2 17:30:00.948: INFO: namespace e2e-tests-gc-s5hcf deletion completed in 12.797804531s

• [SLOW TEST:22.645 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:30:00.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 17:30:01.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-zt5bd'
Jan  2 17:30:03.239: INFO: stderr: ""
Jan  2 17:30:03.240: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan  2 17:30:03.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-zt5bd'
Jan  2 17:30:12.689: INFO: stderr: ""
Jan  2 17:30:12.690: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:30:12.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zt5bd" for this suite.
Jan  2 17:30:20.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:30:20.835: INFO: namespace: e2e-tests-kubectl-zt5bd, resource: bindings, ignored listing per whitelist
Jan  2 17:30:20.978: INFO: namespace e2e-tests-kubectl-zt5bd deletion completed in 8.267385997s

• [SLOW TEST:20.029 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:30:20.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  2 17:30:21.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:21.669: INFO: stderr: ""
Jan  2 17:30:21.669: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 17:30:21.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:21.960: INFO: stderr: ""
Jan  2 17:30:21.961: INFO: stdout: "update-demo-nautilus-5kkc8 update-demo-nautilus-czl7t "
Jan  2 17:30:21.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5kkc8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:22.219: INFO: stderr: ""
Jan  2 17:30:22.219: INFO: stdout: ""
Jan  2 17:30:22.219: INFO: update-demo-nautilus-5kkc8 is created but not running
Jan  2 17:30:27.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:27.483: INFO: stderr: ""
Jan  2 17:30:27.483: INFO: stdout: "update-demo-nautilus-5kkc8 update-demo-nautilus-czl7t "
Jan  2 17:30:27.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5kkc8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:27.704: INFO: stderr: ""
Jan  2 17:30:27.704: INFO: stdout: ""
Jan  2 17:30:27.704: INFO: update-demo-nautilus-5kkc8 is created but not running
Jan  2 17:30:32.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:32.904: INFO: stderr: ""
Jan  2 17:30:32.904: INFO: stdout: "update-demo-nautilus-5kkc8 update-demo-nautilus-czl7t "
Jan  2 17:30:32.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5kkc8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:33.224: INFO: stderr: ""
Jan  2 17:30:33.224: INFO: stdout: ""
Jan  2 17:30:33.224: INFO: update-demo-nautilus-5kkc8 is created but not running
Jan  2 17:30:38.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:38.459: INFO: stderr: ""
Jan  2 17:30:38.460: INFO: stdout: "update-demo-nautilus-5kkc8 update-demo-nautilus-czl7t "
Jan  2 17:30:38.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5kkc8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:38.698: INFO: stderr: ""
Jan  2 17:30:38.698: INFO: stdout: "true"
Jan  2 17:30:38.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5kkc8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:38.810: INFO: stderr: ""
Jan  2 17:30:38.810: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 17:30:38.810: INFO: validating pod update-demo-nautilus-5kkc8
Jan  2 17:30:38.823: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 17:30:38.823: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 17:30:38.823: INFO: update-demo-nautilus-5kkc8 is verified up and running
Jan  2 17:30:38.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czl7t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:38.961: INFO: stderr: ""
Jan  2 17:30:38.961: INFO: stdout: "true"
Jan  2 17:30:38.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czl7t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:39.065: INFO: stderr: ""
Jan  2 17:30:39.065: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 17:30:39.065: INFO: validating pod update-demo-nautilus-czl7t
Jan  2 17:30:39.082: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 17:30:39.082: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 17:30:39.082: INFO: update-demo-nautilus-czl7t is verified up and running
STEP: using delete to clean up resources
Jan  2 17:30:39.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:39.197: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 17:30:39.197: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  2 17:30:39.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-fk2fj'
Jan  2 17:30:39.394: INFO: stderr: "No resources found.\n"
Jan  2 17:30:39.394: INFO: stdout: ""
Jan  2 17:30:39.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-fk2fj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 17:30:39.590: INFO: stderr: ""
Jan  2 17:30:39.591: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:30:39.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fk2fj" for this suite.
Jan  2 17:31:03.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:31:03.738: INFO: namespace: e2e-tests-kubectl-fk2fj, resource: bindings, ignored listing per whitelist
Jan  2 17:31:03.888: INFO: namespace e2e-tests-kubectl-fk2fj deletion completed in 24.262727553s

• [SLOW TEST:42.909 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
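The Update Demo block above is a poll loop: list the pods by label, then run a go-template check per pod that prints `true` only when the `update-demo` container reports a running state, retrying every 5 seconds until all pods pass. A minimal Python sketch of that polling logic, assuming a caller-supplied `get_pod_status` callback as a hypothetical stand-in for the `kubectl get pods <name> -o template ...` call:

```python
import time

def wait_until_running(pod_names, get_pod_status, timeout=300, interval=5):
    """Poll each pod until its target container reports as running.

    get_pod_status(name) mirrors the kubectl go-template check in the
    log: it returns "true" once the container is running, "" otherwise.
    Returns True when every pod passed, False on timeout.
    """
    deadline = time.monotonic() + timeout
    pending = list(pod_names)
    while pending:
        # Keep only the pods whose check has not yet printed "true".
        pending = [p for p in pending if get_pod_status(p) != "true"]
        if not pending:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
    return True
```

This matches the log's behavior of re-checking `update-demo-nautilus-5kkc8` four times ("created but not running" three times, then `stdout: "true"`).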
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:31:03.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-vftbq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vftbq to expose endpoints map[]
Jan  2 17:31:04.467: INFO: Get endpoints failed (124.604493ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  2 17:31:05.483: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vftbq exposes endpoints map[] (1.139953352s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-vftbq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vftbq to expose endpoints map[pod1:[80]]
Jan  2 17:31:11.119: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.603239581s elapsed, will retry)
Jan  2 17:31:15.759: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vftbq exposes endpoints map[pod1:[80]] (10.242424463s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-vftbq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vftbq to expose endpoints map[pod1:[80] pod2:[80]]
Jan  2 17:31:21.097: INFO: Unexpected endpoints: found map[a7a8d982-2d85-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (5.31118076s elapsed, will retry)
Jan  2 17:31:26.721: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vftbq exposes endpoints map[pod1:[80] pod2:[80]] (10.935862539s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-vftbq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vftbq to expose endpoints map[pod2:[80]]
Jan  2 17:31:27.843: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vftbq exposes endpoints map[pod2:[80]] (1.110187392s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-vftbq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-vftbq to expose endpoints map[]
Jan  2 17:31:29.795: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-vftbq exposes endpoints map[] (1.754448374s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:31:30.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-vftbq" for this suite.
Jan  2 17:31:52.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:31:52.774: INFO: namespace: e2e-tests-services-vftbq, resource: bindings, ignored listing per whitelist
Jan  2 17:31:52.802: INFO: namespace e2e-tests-services-vftbq deletion completed in 22.485320534s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:48.913 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
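The Services test above repeatedly compares the service's Endpoints object against an expected map of pod name to ports (`map[pod1:[80] pod2:[80]]`), tolerating transient mismatches ("will retry") until a 3m0s timeout. A sketch of that validation loop, assuming `get_endpoints` as a hypothetical stand-in for fetching the Endpoints object and flattening it to `{pod_name: [ports]}`:

```python
import time

def validate_endpoints(get_endpoints, expected, timeout=180, interval=1):
    """Retry until the observed endpoints map equals the expected one.

    Mirrors the 'waiting up to 3m0s for service ... to expose endpoints'
    steps in the log; returns the elapsed seconds the log reports.
    """
    start = time.monotonic()
    while True:
        observed = get_endpoints()
        if observed == expected:
            return time.monotonic() - start
        if time.monotonic() - start >= timeout:
            raise TimeoutError(f"found {observed}, expected {expected}")
        time.sleep(interval)
```

The intermediate mismatch at 17:31:21 (a pod UID keyed entry instead of `pod1`/`pod2`) is exactly the transient state such a loop absorbs before the comparison finally succeeds.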
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:31:52.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 17:31:52.988: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3f5a93c-2d85-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-lrqqc" to be "success or failure"
Jan  2 17:31:53.001: INFO: Pod "downwardapi-volume-c3f5a93c-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.672174ms
Jan  2 17:31:55.022: INFO: Pod "downwardapi-volume-c3f5a93c-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033434492s
Jan  2 17:31:57.039: INFO: Pod "downwardapi-volume-c3f5a93c-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050911902s
Jan  2 17:32:00.009: INFO: Pod "downwardapi-volume-c3f5a93c-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.020251548s
Jan  2 17:32:02.355: INFO: Pod "downwardapi-volume-c3f5a93c-2d85-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.36634948s
Jan  2 17:32:04.374: INFO: Pod "downwardapi-volume-c3f5a93c-2d85-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.385940061s
STEP: Saw pod success
Jan  2 17:32:04.375: INFO: Pod "downwardapi-volume-c3f5a93c-2d85-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:32:04.381: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c3f5a93c-2d85-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 17:32:04.692: INFO: Waiting for pod downwardapi-volume-c3f5a93c-2d85-11ea-b611-0242ac110005 to disappear
Jan  2 17:32:04.715: INFO: Pod downwardapi-volume-c3f5a93c-2d85-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:32:04.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lrqqc" for this suite.
Jan  2 17:32:10.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:32:10.970: INFO: namespace: e2e-tests-downward-api-lrqqc, resource: bindings, ignored listing per whitelist
Jan  2 17:32:11.002: INFO: namespace e2e-tests-downward-api-lrqqc deletion completed in 6.254746723s

• [SLOW TEST:18.199 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
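The Downward API test above waits up to 5m0s for the pod to satisfy the "success or failure" condition, i.e. for `status.phase` to leave `Pending`/`Running` and reach a terminal phase. A sketch of that wait, assuming `get_phase` as a hypothetical stand-in for reading `pod.status.phase`:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300, interval=2):
    """Poll pod phase until it is Succeeded or Failed, as in the
    'Waiting up to 5m0s ... to be "success or failure"' step above.

    Returns the terminal phase, or raises on timeout.
    """
    deadline = time.monotonic() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if time.monotonic() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)
```

In the log this loop observed `Phase="Pending"` five times over ~11 seconds before `Phase="Succeeded"` ended the wait.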
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:32:11.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  2 17:32:11.148: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:32:30.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-b86vs" for this suite.
Jan  2 17:32:36.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:32:37.000: INFO: namespace: e2e-tests-init-container-b86vs, resource: bindings, ignored listing per whitelist
Jan  2 17:32:37.108: INFO: namespace e2e-tests-init-container-b86vs deletion completed in 6.45080561s

• [SLOW TEST:26.106 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:32:37.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan  2 17:32:47.340: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-de55c637-2d85-11ea-b611-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-4mm2l", SelfLink:"/api/v1/namespaces/e2e-tests-pods-4mm2l/pods/pod-submit-remove-de55c637-2d85-11ea-b611-0242ac110005", UID:"de5d15df-2d85-11ea-a994-fa163e34d433", ResourceVersion:"16937483", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713583157, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"228514163"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-454d4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0024d7e80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-454d4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001fffb28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001bfa9c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fffb60)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001fffb80)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001fffb88), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001fffb8c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713583157, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713583166, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713583166, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713583157, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001b70460), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001b70480), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://3b5e428f4bc6397be0f7fc8cbb69010a2ceebb7bc623e9490aade208b0834a3d"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:33:02.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4mm2l" for this suite.
Jan  2 17:33:08.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:33:08.984: INFO: namespace: e2e-tests-pods-4mm2l, resource: bindings, ignored listing per whitelist
Jan  2 17:33:09.121: INFO: namespace e2e-tests-pods-4mm2l deletion completed in 6.416224729s

• [SLOW TEST:32.013 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:33:09.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0102 17:33:22.142823       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 17:33:22.143: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:33:22.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-sb8dj" for this suite.
Jan  2 17:33:41.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:33:41.616: INFO: namespace: e2e-tests-gc-sb8dj, resource: bindings, ignored listing per whitelist
Jan  2 17:33:41.633: INFO: namespace e2e-tests-gc-sb8dj deletion completed in 19.468705329s

• [SLOW TEST:32.509 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:33:41.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 17:33:44.948: INFO: Waiting up to 5m0s for pod "downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-m9wwx" to be "success or failure"
Jan  2 17:33:45.514: INFO: Pod "downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 565.911228ms
Jan  2 17:33:47.611: INFO: Pod "downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662967991s
Jan  2 17:33:49.699: INFO: Pod "downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.751164383s
Jan  2 17:33:52.024: INFO: Pod "downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.075642979s
Jan  2 17:33:54.053: INFO: Pod "downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.104706111s
Jan  2 17:33:57.561: INFO: Pod "downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.613545998s
Jan  2 17:33:59.586: INFO: Pod "downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.638154528s
Jan  2 17:34:01.605: INFO: Pod "downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.65721659s
Jan  2 17:34:03.641: INFO: Pod "downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.69267601s
STEP: Saw pod success
Jan  2 17:34:03.641: INFO: Pod "downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:34:03.695: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 17:34:04.186: INFO: Waiting for pod downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005 to disappear
Jan  2 17:34:04.198: INFO: Pod downwardapi-volume-051f5695-2d86-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:34:04.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m9wwx" for this suite.
Jan  2 17:34:10.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:34:10.396: INFO: namespace: e2e-tests-projected-m9wwx, resource: bindings, ignored listing per whitelist
Jan  2 17:34:10.489: INFO: namespace e2e-tests-projected-m9wwx deletion completed in 6.279262681s

• [SLOW TEST:28.855 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
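Editor's note: the Projected downwardAPI test above mounts a projected volume that exposes only the pod's own name to the container. A minimal sketch of such a manifest, assuming illustrative names and mount path (not read from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # only the pod name is projected
```

The container exits after printing the file, which is why the log shows the pod cycling Pending → Succeeded before its logs are fetched.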
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:34:10.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  2 17:34:10.883: INFO: Waiting up to 5m0s for pod "pod-161d5023-2d86-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-7h5tm" to be "success or failure"
Jan  2 17:34:10.921: INFO: Pod "pod-161d5023-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.60595ms
Jan  2 17:34:12.937: INFO: Pod "pod-161d5023-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053633815s
Jan  2 17:34:14.961: INFO: Pod "pod-161d5023-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077655403s
Jan  2 17:34:17.503: INFO: Pod "pod-161d5023-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.618737609s
Jan  2 17:34:19.517: INFO: Pod "pod-161d5023-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.633483297s
Jan  2 17:34:21.647: INFO: Pod "pod-161d5023-2d86-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.763188279s
STEP: Saw pod success
Jan  2 17:34:21.647: INFO: Pod "pod-161d5023-2d86-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:34:21.652: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-161d5023-2d86-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 17:34:22.687: INFO: Waiting for pod pod-161d5023-2d86-11ea-b611-0242ac110005 to disappear
Jan  2 17:34:23.181: INFO: Pod pod-161d5023-2d86-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:34:23.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7h5tm" for this suite.
Jan  2 17:34:29.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:34:29.463: INFO: namespace: e2e-tests-emptydir-7h5tm, resource: bindings, ignored listing per whitelist
Jan  2 17:34:29.617: INFO: namespace e2e-tests-emptydir-7h5tm deletion completed in 6.40954249s

• [SLOW TEST:19.127 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
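Editor's note: the EmptyDir test above checks the mode bits of an emptyDir mount on the default medium. A sketch along these lines (names are illustrative) reproduces the check by listing the mount point's permissions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example   # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/volume"]   # prints the mount's mode bits
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  restartPolicy: Never
  volumes:
  - name: vol
    emptyDir: {}   # default medium (node storage); 'medium: Memory' would use tmpfs
```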
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:34:29.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-217d1053-2d86-11ea-b611-0242ac110005
Jan  2 17:34:29.926: INFO: Pod name my-hostname-basic-217d1053-2d86-11ea-b611-0242ac110005: Found 0 pods out of 1
Jan  2 17:34:35.615: INFO: Pod name my-hostname-basic-217d1053-2d86-11ea-b611-0242ac110005: Found 1 pods out of 1
Jan  2 17:34:35.616: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-217d1053-2d86-11ea-b611-0242ac110005" are running
Jan  2 17:34:40.617: INFO: Pod "my-hostname-basic-217d1053-2d86-11ea-b611-0242ac110005-rrd5l" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 17:34:30 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 17:34:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-217d1053-2d86-11ea-b611-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 17:34:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-217d1053-2d86-11ea-b611-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 17:34:29 +0000 UTC Reason: Message:}])
Jan  2 17:34:40.617: INFO: Trying to dial the pod
Jan  2 17:34:45.685: INFO: Controller my-hostname-basic-217d1053-2d86-11ea-b611-0242ac110005: Got expected result from replica 1 [my-hostname-basic-217d1053-2d86-11ea-b611-0242ac110005-rrd5l]: "my-hostname-basic-217d1053-2d86-11ea-b611-0242ac110005-rrd5l", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:34:45.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-n9fhj" for this suite.
Jan  2 17:34:53.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:34:54.159: INFO: namespace: e2e-tests-replication-controller-n9fhj, resource: bindings, ignored listing per whitelist
Jan  2 17:34:54.170: INFO: namespace e2e-tests-replication-controller-n9fhj deletion completed in 8.474409456s

• [SLOW TEST:24.553 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
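Editor's note: the ReplicationController test above expects each replica to echo its own pod name over HTTP. A comparable manifest, where the image and names are assumptions inferred from the pod names in the log rather than taken from the test source:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic   # illustrative name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376   # serve-hostname's default port
```

"Got expected result from replica 1" in the log corresponds to dialing each pod and comparing the HTTP response body against the pod's name.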
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:34:54.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0102 17:35:25.257196       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 17:35:25.257: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:35:25.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-svxxk" for this suite.
Jan  2 17:35:35.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:35:36.332: INFO: namespace: e2e-tests-gc-svxxk, resource: bindings, ignored listing per whitelist
Jan  2 17:35:36.513: INFO: namespace e2e-tests-gc-svxxk deletion completed in 11.247921606s

• [SLOW TEST:42.342 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
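Editor's note: the garbage-collector test above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan, then waits 30 seconds to confirm the owned ReplicaSet survives. The DELETE request body is, roughly:

```yaml
# DeleteOptions sent with the DELETE request
# (JSON: {"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"})
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan   # alternatives: Background, Foreground (both cascade)
```

From kubectl this corresponds to `kubectl delete deployment <name> --cascade=orphan` (older clients used `--cascade=false`); the GC then removes the ownerReferences from the ReplicaSet instead of deleting it.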
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:35:36.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  2 17:35:37.451: INFO: Waiting up to 5m0s for pod "pod-49ba09d3-2d86-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-p8b69" to be "success or failure"
Jan  2 17:35:37.528: INFO: Pod "pod-49ba09d3-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 76.569003ms
Jan  2 17:35:39.621: INFO: Pod "pod-49ba09d3-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170392669s
Jan  2 17:35:41.666: INFO: Pod "pod-49ba09d3-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214772582s
Jan  2 17:35:43.726: INFO: Pod "pod-49ba09d3-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275185613s
Jan  2 17:35:45.754: INFO: Pod "pod-49ba09d3-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.302842257s
Jan  2 17:35:47.794: INFO: Pod "pod-49ba09d3-2d86-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.343159472s
STEP: Saw pod success
Jan  2 17:35:47.795: INFO: Pod "pod-49ba09d3-2d86-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:35:47.807: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-49ba09d3-2d86-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 17:35:47.999: INFO: Waiting for pod pod-49ba09d3-2d86-11ea-b611-0242ac110005 to disappear
Jan  2 17:35:48.028: INFO: Pod pod-49ba09d3-2d86-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:35:48.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p8b69" for this suite.
Jan  2 17:35:56.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:35:56.163: INFO: namespace: e2e-tests-emptydir-p8b69, resource: bindings, ignored listing per whitelist
Jan  2 17:35:56.264: INFO: namespace e2e-tests-emptydir-p8b69 deletion completed in 8.215428938s

• [SLOW TEST:19.749 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:35:56.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-5514fb91-2d86-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 17:35:56.493: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-7p8lz" to be "success or failure"
Jan  2 17:35:56.599: INFO: Pod "pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 105.528771ms
Jan  2 17:35:58.745: INFO: Pod "pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251136272s
Jan  2 17:36:00.761: INFO: Pod "pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267741681s
Jan  2 17:36:03.035: INFO: Pod "pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.541116414s
Jan  2 17:36:05.088: INFO: Pod "pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.594740928s
Jan  2 17:36:07.112: INFO: Pod "pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.618536304s
Jan  2 17:36:09.330: INFO: Pod "pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.836489309s
STEP: Saw pod success
Jan  2 17:36:09.330: INFO: Pod "pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:36:09.354: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 17:36:09.644: INFO: Waiting for pod pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005 to disappear
Jan  2 17:36:09.804: INFO: Pod pod-projected-configmaps-551688f6-2d86-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:36:09.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7p8lz" for this suite.
Jan  2 17:36:15.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:36:16.059: INFO: namespace: e2e-tests-projected-7p8lz, resource: bindings, ignored listing per whitelist
Jan  2 17:36:16.108: INFO: namespace e2e-tests-projected-7p8lz deletion completed in 6.261276646s

• [SLOW TEST:19.843 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
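Editor's note: "with mappings" in the Projected configMap test above means ConfigMap keys are remapped to different file paths via `items`. A sketch with illustrative key and path names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  restartPolicy: Never
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # illustrative ConfigMap name
          items:
          - key: data-2            # original key in the ConfigMap
            path: path/to/data-2   # remapped file path inside the volume
```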
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:36:16.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 17:36:16.476: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60f1056e-2d86-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-gqx4j" to be "success or failure"
Jan  2 17:36:16.493: INFO: Pod "downwardapi-volume-60f1056e-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.935544ms
Jan  2 17:36:18.571: INFO: Pod "downwardapi-volume-60f1056e-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095061689s
Jan  2 17:36:20.593: INFO: Pod "downwardapi-volume-60f1056e-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116978578s
Jan  2 17:36:22.712: INFO: Pod "downwardapi-volume-60f1056e-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.235330188s
Jan  2 17:36:24.738: INFO: Pod "downwardapi-volume-60f1056e-2d86-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26195254s
Jan  2 17:36:26.768: INFO: Pod "downwardapi-volume-60f1056e-2d86-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.292239413s
STEP: Saw pod success
Jan  2 17:36:26.769: INFO: Pod "downwardapi-volume-60f1056e-2d86-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:36:26.789: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-60f1056e-2d86-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 17:36:27.141: INFO: Waiting for pod downwardapi-volume-60f1056e-2d86-11ea-b611-0242ac110005 to disappear
Jan  2 17:36:27.294: INFO: Pod downwardapi-volume-60f1056e-2d86-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:36:27.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gqx4j" for this suite.
Jan  2 17:36:33.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:36:33.447: INFO: namespace: e2e-tests-projected-gqx4j, resource: bindings, ignored listing per whitelist
Jan  2 17:36:33.652: INFO: namespace e2e-tests-projected-gqx4j deletion completed in 6.344046573s

• [SLOW TEST:17.544 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
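Editor's note: the cpu-request variant above differs from the podname test in using `resourceFieldRef` rather than `fieldRef`. A sketch, with names, request value, and divisor chosen for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m   # report the request in millicores
```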
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:36:33.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  2 17:36:44.678: INFO: Successfully updated pod "pod-update-6b76a378-2d86-11ea-b611-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan  2 17:36:44.792: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:36:44.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qcf6z" for this suite.
Jan  2 17:37:08.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:37:09.046: INFO: namespace: e2e-tests-pods-qcf6z, resource: bindings, ignored listing per whitelist
Jan  2 17:37:09.049: INFO: namespace e2e-tests-pods-qcf6z deletion completed in 24.25110525s

• [SLOW TEST:35.396 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:37:09.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan  2 17:37:09.315: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  2 17:37:09.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:09.733: INFO: stderr: ""
Jan  2 17:37:09.734: INFO: stdout: "service/redis-slave created\n"
Jan  2 17:37:09.735: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  2 17:37:09.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:10.320: INFO: stderr: ""
Jan  2 17:37:10.320: INFO: stdout: "service/redis-master created\n"
Jan  2 17:37:10.321: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  2 17:37:10.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:10.865: INFO: stderr: ""
Jan  2 17:37:10.865: INFO: stdout: "service/frontend created\n"
Jan  2 17:37:10.867: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  2 17:37:10.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:11.431: INFO: stderr: ""
Jan  2 17:37:11.431: INFO: stdout: "deployment.extensions/frontend created\n"
Jan  2 17:37:11.431: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  2 17:37:11.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:12.089: INFO: stderr: ""
Jan  2 17:37:12.089: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan  2 17:37:12.091: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  2 17:37:12.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:14.539: INFO: stderr: ""
Jan  2 17:37:14.540: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan  2 17:37:14.540: INFO: Waiting for all frontend pods to be Running.
Jan  2 17:37:49.596: INFO: Waiting for frontend to serve content.
Jan  2 17:37:49.859: INFO: Trying to add a new entry to the guestbook.
Jan  2 17:37:49.909: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  2 17:37:49.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:50.256: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 17:37:50.256: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 17:37:50.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:50.697: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 17:37:50.697: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 17:37:50.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:50.943: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 17:37:50.944: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 17:37:50.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:51.084: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 17:37:51.085: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 17:37:51.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:51.306: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 17:37:51.306: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  2 17:37:51.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-66vvn'
Jan  2 17:37:51.544: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 17:37:51.544: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:37:51.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-66vvn" for this suite.
Jan  2 17:38:45.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:38:46.033: INFO: namespace: e2e-tests-kubectl-66vvn, resource: bindings, ignored listing per whitelist
Jan  2 17:38:46.046: INFO: namespace e2e-tests-kubectl-66vvn deletion completed in 54.477622166s

• [SLOW TEST:96.996 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:38:46.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rcvdd
Jan  2 17:38:56.312: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rcvdd
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 17:38:56.316: INFO: Initial restart count of pod liveness-exec is 0
Jan  2 17:39:51.088: INFO: Restart count of pod e2e-tests-container-probe-rcvdd/liveness-exec is now 1 (54.771674412s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:39:51.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rcvdd" for this suite.
Jan  2 17:39:59.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:39:59.495: INFO: namespace: e2e-tests-container-probe-rcvdd, resource: bindings, ignored listing per whitelist
Jan  2 17:39:59.628: INFO: namespace e2e-tests-container-probe-rcvdd deletion completed in 8.23437302s

• [SLOW TEST:73.582 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
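The `liveness-exec` pod that drives this restart is never echoed into the log. A minimal sketch of such a pod, assuming a busybox image and timing values chosen so the probe starts failing roughly 10 seconds in (image, names, and timings are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox              # assumed image
    args:
    - /bin/sh
    - -c
    # /tmp/health exists for 10s, after which the exec probe begins to fail
    - touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5    # assumed timings
      periodSeconds: 5
```

Once the probe fails repeatedly, the kubelet restarts the container, which is the `restartCount ... is now 1` transition observed above.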
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:39:59.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:40:12.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-gsnd5" for this suite.
Jan  2 17:40:18.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:40:18.218: INFO: namespace: e2e-tests-kubelet-test-gsnd5, resource: bindings, ignored listing per whitelist
Jan  2 17:40:18.394: INFO: namespace e2e-tests-kubelet-test-gsnd5 deletion completed in 6.323830904s

• [SLOW TEST:18.766 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:40:18.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0102 17:41:00.033020       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 17:41:00.033: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:41:00.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-stj9k" for this suite.
Jan  2 17:41:12.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:41:12.383: INFO: namespace: e2e-tests-gc-stj9k, resource: bindings, ignored listing per whitelist
Jan  2 17:41:12.455: INFO: namespace e2e-tests-gc-stj9k deletion completed in 12.41275276s

• [SLOW TEST:54.060 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
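"Delete options say so" refers to the deletion propagation policy. At the API level, orphaning the ReplicationController's pods looks roughly like the following request body (a sketch; on a cluster of this vintage the kubectl equivalent is `kubectl delete rc <name> --cascade=false`):

```yaml
# Body of the DELETE request for the ReplicationController (illustrative):
# "Orphan" leaves the RC's pods running with their ownerReferences cleared,
# which is why the garbage collector must not delete them in the 30s window.
propagationPolicy: Orphan
```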
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:41:12.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-13783eb6-2d87-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 17:41:16.761: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-k6gp8" to be "success or failure"
Jan  2 17:41:17.626: INFO: Pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 864.188874ms
Jan  2 17:41:20.964: INFO: Pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202584669s
Jan  2 17:41:23.207: INFO: Pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445852475s
Jan  2 17:41:25.224: INFO: Pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.46236335s
Jan  2 17:41:27.238: INFO: Pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.476704413s
Jan  2 17:41:29.254: INFO: Pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.492831371s
Jan  2 17:41:31.313: INFO: Pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.551823671s
Jan  2 17:41:33.607: INFO: Pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.845689209s
Jan  2 17:41:36.026: INFO: Pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 19.264167908s
Jan  2 17:41:38.043: INFO: Pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.281146784s
STEP: Saw pod success
Jan  2 17:41:38.043: INFO: Pod "pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:41:38.049: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 17:41:38.806: INFO: Waiting for pod pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005 to disappear
Jan  2 17:41:38.832: INFO: Pod pod-projected-secrets-1382a6ea-2d87-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:41:38.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k6gp8" for this suite.
Jan  2 17:41:44.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:41:45.112: INFO: namespace: e2e-tests-projected-k6gp8, resource: bindings, ignored listing per whitelist
Jan  2 17:41:45.201: INFO: namespace e2e-tests-projected-k6gp8 deletion completed in 6.359980486s

• [SLOW TEST:32.745 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
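The pod manifest behind this test is not printed in the log. A minimal sketch of a projected secret volume with `defaultMode` set (the names, image, and mode value are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox                      # assumed image
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                 # files in the volume get mode 0400
      sources:
      - secret:
          name: projected-secret-example
  restartPolicy: Never
```

The test asserts both the file content and that the projected files carry the requested mode, which is why the pod runs to `Succeeded` and its logs are fetched.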
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:41:45.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-4j4k
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 17:41:45.458: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4j4k" in namespace "e2e-tests-subpath-wkhwt" to be "success or failure"
Jan  2 17:41:45.468: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.146492ms
Jan  2 17:41:47.484: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025947207s
Jan  2 17:41:49.515: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056552544s
Jan  2 17:41:51.951: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493240285s
Jan  2 17:41:53.987: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529296362s
Jan  2 17:41:56.000: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.542377421s
Jan  2 17:41:58.017: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Pending", Reason="", readiness=false. Elapsed: 12.558950755s
Jan  2 17:42:00.109: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Pending", Reason="", readiness=false. Elapsed: 14.650675322s
Jan  2 17:42:02.120: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Running", Reason="", readiness=false. Elapsed: 16.661825201s
Jan  2 17:42:04.135: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Running", Reason="", readiness=false. Elapsed: 18.67727574s
Jan  2 17:42:06.152: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Running", Reason="", readiness=false. Elapsed: 20.694311008s
Jan  2 17:42:08.167: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Running", Reason="", readiness=false. Elapsed: 22.708468901s
Jan  2 17:42:10.190: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Running", Reason="", readiness=false. Elapsed: 24.732050093s
Jan  2 17:42:12.245: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Running", Reason="", readiness=false. Elapsed: 26.78649966s
Jan  2 17:42:14.264: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Running", Reason="", readiness=false. Elapsed: 28.806245473s
Jan  2 17:42:16.287: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Running", Reason="", readiness=false. Elapsed: 30.82876155s
Jan  2 17:42:18.304: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Running", Reason="", readiness=false. Elapsed: 32.845439471s
Jan  2 17:42:20.342: INFO: Pod "pod-subpath-test-configmap-4j4k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.883955883s
STEP: Saw pod success
Jan  2 17:42:20.342: INFO: Pod "pod-subpath-test-configmap-4j4k" satisfied condition "success or failure"
Jan  2 17:42:20.354: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-4j4k container test-container-subpath-configmap-4j4k: 
STEP: delete the pod
Jan  2 17:42:20.724: INFO: Waiting for pod pod-subpath-test-configmap-4j4k to disappear
Jan  2 17:42:20.744: INFO: Pod pod-subpath-test-configmap-4j4k no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4j4k
Jan  2 17:42:20.744: INFO: Deleting pod "pod-subpath-test-configmap-4j4k" in namespace "e2e-tests-subpath-wkhwt"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:42:20.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-wkhwt" for this suite.
Jan  2 17:42:28.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:42:28.883: INFO: namespace: e2e-tests-subpath-wkhwt, resource: bindings, ignored listing per whitelist
Jan  2 17:42:28.948: INFO: namespace e2e-tests-subpath-wkhwt deletion completed in 8.179506222s

• [SLOW TEST:43.747 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
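For the subpath case, the interesting part of the (unlogged) manifest is the `subPath` field on the volume mount, which mounts a single path from the volume rather than the whole directory. A sketch under assumed names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-example   # illustrative name
spec:
  containers:
  - name: test-container-subpath-configmap
    image: busybox                           # assumed image
    command: ["cat", "/test-volume/my-file"]
    volumeMounts:
    - name: config-volume
      mountPath: /test-volume/my-file
      subPath: my-file    # mount one key of the ConfigMap, not the whole volume
  volumes:
  - name: config-volume
    configMap:
      name: my-configmap                     # illustrative ConfigMap name
  restartPolicy: Never
```

The "atomic writer" aspect is that ConfigMap and Secret volumes are updated via atomic symlink swaps; the test verifies subpath mounts still resolve correctly across those updates.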
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:42:28.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  2 17:42:29.153: INFO: Waiting up to 5m0s for pod "pod-3f24a206-2d87-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-6mv2j" to be "success or failure"
Jan  2 17:42:29.253: INFO: Pod "pod-3f24a206-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.658137ms
Jan  2 17:42:31.650: INFO: Pod "pod-3f24a206-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.497310384s
Jan  2 17:42:33.669: INFO: Pod "pod-3f24a206-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.51585373s
Jan  2 17:42:35.886: INFO: Pod "pod-3f24a206-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.733104713s
Jan  2 17:42:37.906: INFO: Pod "pod-3f24a206-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.752882744s
Jan  2 17:42:40.164: INFO: Pod "pod-3f24a206-2d87-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.011405222s
STEP: Saw pod success
Jan  2 17:42:40.165: INFO: Pod "pod-3f24a206-2d87-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:42:40.416: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3f24a206-2d87-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 17:42:40.624: INFO: Waiting for pod pod-3f24a206-2d87-11ea-b611-0242ac110005 to disappear
Jan  2 17:42:40.641: INFO: Pod pod-3f24a206-2d87-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:42:40.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6mv2j" for this suite.
Jan  2 17:42:46.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:42:46.921: INFO: namespace: e2e-tests-emptydir-6mv2j, resource: bindings, ignored listing per whitelist
Jan  2 17:42:46.964: INFO: namespace e2e-tests-emptydir-6mv2j deletion completed in 6.308564739s

• [SLOW TEST:18.015 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
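The "(non-root,0777,default)" label encodes the test matrix: a non-root user, files created with mode 0777, on the default emptyDir medium (node disk). A sketch of such a pod, with an assumed image and UID:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example       # illustrative name
spec:
  securityContext:
    runAsUser: 1001                # non-root, illustrative UID
  containers:
  - name: test-container
    image: busybox                 # assumed image
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # default medium; `medium: Memory` would use tmpfs
  restartPolicy: Never
```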
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:42:46.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 17:42:47.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-96f49" to be "success or failure"
Jan  2 17:42:47.172: INFO: Pod "downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.799254ms
Jan  2 17:42:49.186: INFO: Pod "downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051077002s
Jan  2 17:42:51.207: INFO: Pod "downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07196172s
Jan  2 17:42:53.582: INFO: Pod "downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446976087s
Jan  2 17:42:55.603: INFO: Pod "downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.467149246s
Jan  2 17:42:57.620: INFO: Pod "downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.484623159s
Jan  2 17:43:00.137: INFO: Pod "downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.00127224s
STEP: Saw pod success
Jan  2 17:43:00.137: INFO: Pod "downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:43:00.146: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 17:43:00.365: INFO: Waiting for pod downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005 to disappear
Jan  2 17:43:00.424: INFO: Pod downwardapi-volume-49db90cf-2d87-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:43:00.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-96f49" for this suite.
Jan  2 17:43:06.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:43:06.593: INFO: namespace: e2e-tests-downward-api-96f49, resource: bindings, ignored listing per whitelist
Jan  2 17:43:06.804: INFO: namespace e2e-tests-downward-api-96f49 deletion completed in 6.369916375s

• [SLOW TEST:19.840 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
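Exposing a container's CPU request through a downward API volume relies on `resourceFieldRef` with a `divisor`. A minimal sketch (names, image, and the request value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                # expose the request in millicores
  restartPolicy: Never
```

The test reads the generated file from the pod's logs and checks it matches the declared request, which is why the pod's logs are fetched after it reaches `Succeeded`.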
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:43:06.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-55b8da4e-2d87-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 17:43:07.075: INFO: Waiting up to 5m0s for pod "pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005" in namespace "e2e-tests-secrets-5txzw" to be "success or failure"
Jan  2 17:43:07.181: INFO: Pod "pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 105.580802ms
Jan  2 17:43:09.202: INFO: Pod "pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127512792s
Jan  2 17:43:11.221: INFO: Pod "pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146165022s
Jan  2 17:43:13.431: INFO: Pod "pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.356567763s
Jan  2 17:43:15.459: INFO: Pod "pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.383601354s
Jan  2 17:43:17.476: INFO: Pod "pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.401451663s
Jan  2 17:43:19.588: INFO: Pod "pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.512957603s
STEP: Saw pod success
Jan  2 17:43:19.588: INFO: Pod "pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:43:19.597: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 17:43:19.984: INFO: Waiting for pod pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005 to disappear
Jan  2 17:43:20.048: INFO: Pod pod-secrets-55ba262d-2d87-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:43:20.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5txzw" for this suite.
Jan  2 17:43:26.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:43:26.259: INFO: namespace: e2e-tests-secrets-5txzw, resource: bindings, ignored listing per whitelist
Jan  2 17:43:26.265: INFO: namespace e2e-tests-secrets-5txzw deletion completed in 6.208620119s

• [SLOW TEST:19.460 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:43:26.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-616a722a-2d87-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 17:43:26.811: INFO: Waiting up to 5m0s for pod "pod-secrets-616fbacf-2d87-11ea-b611-0242ac110005" in namespace "e2e-tests-secrets-9tp5x" to be "success or failure"
Jan  2 17:43:26.838: INFO: Pod "pod-secrets-616fbacf-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.382319ms
Jan  2 17:43:28.897: INFO: Pod "pod-secrets-616fbacf-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085899626s
Jan  2 17:43:30.911: INFO: Pod "pod-secrets-616fbacf-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09972245s
Jan  2 17:43:33.284: INFO: Pod "pod-secrets-616fbacf-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.472375959s
Jan  2 17:43:35.322: INFO: Pod "pod-secrets-616fbacf-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.510839093s
Jan  2 17:43:37.337: INFO: Pod "pod-secrets-616fbacf-2d87-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.526294716s
STEP: Saw pod success
Jan  2 17:43:37.338: INFO: Pod "pod-secrets-616fbacf-2d87-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:43:37.342: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-616fbacf-2d87-11ea-b611-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 17:43:37.426: INFO: Waiting for pod pod-secrets-616fbacf-2d87-11ea-b611-0242ac110005 to disappear
Jan  2 17:43:37.448: INFO: Pod pod-secrets-616fbacf-2d87-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:43:37.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9tp5x" for this suite.
Jan  2 17:43:45.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:43:45.278: INFO: namespace: e2e-tests-secrets-9tp5x, resource: bindings, ignored listing per whitelist
Jan  2 17:43:45.327: INFO: namespace e2e-tests-secrets-9tp5x deletion completed in 7.87157666s

• [SLOW TEST:19.062 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:43:45.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan  2 17:43:45.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8dfdw'
Jan  2 17:43:48.450: INFO: stderr: ""
Jan  2 17:43:48.450: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan  2 17:43:49.856: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 17:43:49.856: INFO: Found 0 / 1
Jan  2 17:43:50.502: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 17:43:50.502: INFO: Found 0 / 1
Jan  2 17:43:51.468: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 17:43:51.468: INFO: Found 0 / 1
Jan  2 17:43:52.471: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 17:43:52.472: INFO: Found 0 / 1
Jan  2 17:43:53.682: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 17:43:53.683: INFO: Found 0 / 1
Jan  2 17:43:54.536: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 17:43:54.537: INFO: Found 0 / 1
Jan  2 17:43:55.480: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 17:43:55.480: INFO: Found 0 / 1
Jan  2 17:43:56.536: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 17:43:56.536: INFO: Found 0 / 1
Jan  2 17:43:57.476: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 17:43:57.476: INFO: Found 1 / 1
Jan  2 17:43:57.476: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  2 17:43:57.491: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 17:43:57.491: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Jan  2 17:43:57.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-q9cjl redis-master --namespace=e2e-tests-kubectl-8dfdw'
Jan  2 17:43:57.706: INFO: stderr: ""
Jan  2 17:43:57.707: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Jan 17:43:55.888 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 17:43:55.889 # Server started, Redis version 3.2.12\n1:M 02 Jan 17:43:55.889 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 17:43:55.889 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  2 17:43:57.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q9cjl redis-master --namespace=e2e-tests-kubectl-8dfdw --tail=1'
Jan  2 17:43:57.941: INFO: stderr: ""
Jan  2 17:43:57.941: INFO: stdout: "1:M 02 Jan 17:43:55.889 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  2 17:43:57.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q9cjl redis-master --namespace=e2e-tests-kubectl-8dfdw --limit-bytes=1'
Jan  2 17:43:58.092: INFO: stderr: ""
Jan  2 17:43:58.093: INFO: stdout: " "
STEP: exposing timestamps
Jan  2 17:43:58.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q9cjl redis-master --namespace=e2e-tests-kubectl-8dfdw --tail=1 --timestamps'
Jan  2 17:43:58.214: INFO: stderr: ""
Jan  2 17:43:58.214: INFO: stdout: "2020-01-02T17:43:55.890795154Z 1:M 02 Jan 17:43:55.889 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  2 17:44:00.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q9cjl redis-master --namespace=e2e-tests-kubectl-8dfdw --since=1s'
Jan  2 17:44:00.953: INFO: stderr: ""
Jan  2 17:44:00.953: INFO: stdout: ""
Jan  2 17:44:00.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q9cjl redis-master --namespace=e2e-tests-kubectl-8dfdw --since=24h'
Jan  2 17:44:01.167: INFO: stderr: ""
Jan  2 17:44:01.167: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Jan 17:43:55.888 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 17:43:55.889 # Server started, Redis version 3.2.12\n1:M 02 Jan 17:43:55.889 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 17:43:55.889 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan  2 17:44:01.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8dfdw'
Jan  2 17:44:01.296: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 17:44:01.296: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  2 17:44:01.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-8dfdw'
Jan  2 17:44:01.448: INFO: stderr: "No resources found.\n"
Jan  2 17:44:01.448: INFO: stdout: ""
Jan  2 17:44:01.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-8dfdw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 17:44:01.575: INFO: stderr: ""
Jan  2 17:44:01.575: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:44:01.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8dfdw" for this suite.
Jan  2 17:44:25.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:44:25.774: INFO: namespace: e2e-tests-kubectl-8dfdw, resource: bindings, ignored listing per whitelist
Jan  2 17:44:25.784: INFO: namespace e2e-tests-kubectl-8dfdw deletion completed in 24.199270284s

• [SLOW TEST:40.456 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:44:25.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-nfbt7
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 17:44:26.016: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 17:45:00.233: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-nfbt7 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 17:45:00.234: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 17:45:00.857: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:45:00.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-nfbt7" for this suite.
Jan  2 17:45:24.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:45:25.040: INFO: namespace: e2e-tests-pod-network-test-nfbt7, resource: bindings, ignored listing per whitelist
Jan  2 17:45:25.059: INFO: namespace e2e-tests-pod-network-test-nfbt7 deletion completed in 24.17668296s

• [SLOW TEST:59.274 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:45:25.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  2 17:45:25.222: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:45:42.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-7hkfq" for this suite.
Jan  2 17:45:48.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:45:48.614: INFO: namespace: e2e-tests-init-container-7hkfq, resource: bindings, ignored listing per whitelist
Jan  2 17:45:48.795: INFO: namespace e2e-tests-init-container-7hkfq deletion completed in 6.4819605s

• [SLOW TEST:23.737 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:45:48.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan  2 17:45:59.145: INFO: Pod pod-hostip-b64e1f36-2d87-11ea-b611-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:45:59.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mpnzk" for this suite.
Jan  2 17:46:21.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:46:21.306: INFO: namespace: e2e-tests-pods-mpnzk, resource: bindings, ignored listing per whitelist
Jan  2 17:46:21.327: INFO: namespace e2e-tests-pods-mpnzk deletion completed in 22.171861003s

• [SLOW TEST:32.531 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:46:21.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-rcvkv/secret-test-c9a65e9f-2d87-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 17:46:21.535: INFO: Waiting up to 5m0s for pod "pod-configmaps-c9a6ed3f-2d87-11ea-b611-0242ac110005" in namespace "e2e-tests-secrets-rcvkv" to be "success or failure"
Jan  2 17:46:21.554: INFO: Pod "pod-configmaps-c9a6ed3f-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.910534ms
Jan  2 17:46:23.579: INFO: Pod "pod-configmaps-c9a6ed3f-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043811823s
Jan  2 17:46:25.605: INFO: Pod "pod-configmaps-c9a6ed3f-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069594063s
Jan  2 17:46:28.004: INFO: Pod "pod-configmaps-c9a6ed3f-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.469228811s
Jan  2 17:46:30.017: INFO: Pod "pod-configmaps-c9a6ed3f-2d87-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.48164376s
Jan  2 17:46:32.073: INFO: Pod "pod-configmaps-c9a6ed3f-2d87-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.537636274s
STEP: Saw pod success
Jan  2 17:46:32.073: INFO: Pod "pod-configmaps-c9a6ed3f-2d87-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 17:46:32.098: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c9a6ed3f-2d87-11ea-b611-0242ac110005 container env-test: 
STEP: delete the pod
Jan  2 17:46:32.459: INFO: Waiting for pod pod-configmaps-c9a6ed3f-2d87-11ea-b611-0242ac110005 to disappear
Jan  2 17:46:32.477: INFO: Pod pod-configmaps-c9a6ed3f-2d87-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:46:32.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rcvkv" for this suite.
Jan  2 17:46:38.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:46:38.768: INFO: namespace: e2e-tests-secrets-rcvkv, resource: bindings, ignored listing per whitelist
Jan  2 17:46:38.774: INFO: namespace e2e-tests-secrets-rcvkv deletion completed in 6.284812342s

• [SLOW TEST:17.447 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:46:38.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:46:39.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-jqmcf" for this suite.
Jan  2 17:46:45.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:46:45.367: INFO: namespace: e2e-tests-kubelet-test-jqmcf, resource: bindings, ignored listing per whitelist
Jan  2 17:46:45.392: INFO: namespace e2e-tests-kubelet-test-jqmcf deletion completed in 6.313456851s

• [SLOW TEST:6.618 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:46:45.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-ctdhx
Jan  2 17:46:53.685: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-ctdhx
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 17:46:53.727: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:50:55.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ctdhx" for this suite.
Jan  2 17:51:01.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:51:01.262: INFO: namespace: e2e-tests-container-probe-ctdhx, resource: bindings, ignored listing per whitelist
Jan  2 17:51:01.397: INFO: namespace e2e-tests-container-probe-ctdhx deletion completed in 6.236395708s

• [SLOW TEST:256.004 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:51:01.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  2 17:51:14.278: INFO: Successfully updated pod "annotationupdate70998197-2d88-11ea-b611-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:51:16.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nnkjd" for this suite.
Jan  2 17:51:40.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:51:40.960: INFO: namespace: e2e-tests-downward-api-nnkjd, resource: bindings, ignored listing per whitelist
Jan  2 17:51:41.113: INFO: namespace e2e-tests-downward-api-nnkjd deletion completed in 24.575802727s

• [SLOW TEST:39.716 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:51:41.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  2 17:51:41.318: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-7vbxq,SelfLink:/api/v1/namespaces/e2e-tests-watch-7vbxq/configmaps/e2e-watch-test-resource-version,UID:8835d343-2d88-11ea-a994-fa163e34d433,ResourceVersion:16939997,Generation:0,CreationTimestamp:2020-01-02 17:51:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 17:51:41.318: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-7vbxq,SelfLink:/api/v1/namespaces/e2e-tests-watch-7vbxq/configmaps/e2e-watch-test-resource-version,UID:8835d343-2d88-11ea-a994-fa163e34d433,ResourceVersion:16939998,Generation:0,CreationTimestamp:2020-01-02 17:51:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:51:41.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-7vbxq" for this suite.
Jan  2 17:51:47.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:51:47.471: INFO: namespace: e2e-tests-watch-7vbxq, resource: bindings, ignored listing per whitelist
Jan  2 17:51:47.519: INFO: namespace e2e-tests-watch-7vbxq deletion completed in 6.196536222s

• [SLOW TEST:6.406 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:51:47.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 17:51:47.756: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan  2 17:51:47.765: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4rzl4/daemonsets","resourceVersion":"16940015"},"items":null}

Jan  2 17:51:47.767: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4rzl4/pods","resourceVersion":"16940015"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:51:47.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4rzl4" for this suite.
Jan  2 17:51:53.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:51:53.981: INFO: namespace: e2e-tests-daemonsets-4rzl4, resource: bindings, ignored listing per whitelist
Jan  2 17:51:54.190: INFO: namespace e2e-tests-daemonsets-4rzl4 deletion completed in 6.411909997s

S [SKIPPING] [6.670 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan  2 17:51:47.756: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:51:54.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-w8jcd
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  2 17:51:54.711: INFO: Found 0 stateful pods, waiting for 3
Jan  2 17:52:04.739: INFO: Found 2 stateful pods, waiting for 3
Jan  2 17:52:14.804: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:52:14.804: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:52:14.804: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 17:52:24.726: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:52:24.727: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:52:24.727: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:52:24.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8jcd ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 17:52:25.568: INFO: stderr: ""
Jan  2 17:52:25.569: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 17:52:25.569: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

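The `mv -v /usr/share/nginx/html/index.html /tmp/ || true` command the test execs inside each pod is how it toggles pod readiness: removing the index page makes nginx's HTTP readiness probe fail, and moving it back restores readiness. A minimal local sketch of that idiom (using temp directories as stand-ins for the pod's filesystem — no cluster or kubectl needed):

```shell
# Stand-ins for the pod's web root and stash directory.
workdir="$(mktemp -d)"   # plays /usr/share/nginx/html
stash="$(mktemp -d)"     # plays /tmp
echo "hello" > "$workdir/index.html"

# "Break" readiness: move the page out of the web root.
# `|| true` keeps the step idempotent — if a retry runs this again
# after the file is already gone, the command still exits 0.
mv -v "$workdir/index.html" "$stash/" || true
mv -v "$workdir/index.html" "$stash/" || true   # no-op on retry, not a failure

# "Restore" readiness: move the page back.
mv -v "$stash/index.html" "$workdir/" || true
cat "$workdir/index.html"

rm -rf "$workdir" "$stash"
```

The `|| true` is what lets the e2e framework run the same exec unconditionally in both directions without tracking which state the pod is currently in.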
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  2 17:52:35.682: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  2 17:52:45.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8jcd ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 17:52:46.721: INFO: stderr: ""
Jan  2 17:52:46.722: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 17:52:46.722: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 17:52:47.017: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:52:47.017: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:52:47.017: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:52:47.017: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:52:57.064: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:52:57.064: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:52:57.064: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:52:57.064: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:53:07.062: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:53:07.062: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:53:07.062: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:53:17.060: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:53:17.060: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:53:17.060: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:53:27.034: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:53:27.034: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:53:37.045: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:53:37.045: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  2 17:53:47.035: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
STEP: Rolling back to a previous revision
Jan  2 17:53:57.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8jcd ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 17:53:57.899: INFO: stderr: ""
Jan  2 17:53:57.900: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 17:53:57.900: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 17:53:58.035: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  2 17:54:08.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8jcd ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 17:54:08.893: INFO: stderr: ""
Jan  2 17:54:08.893: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 17:54:08.893: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 17:54:08.931: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:54:08.931: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:54:08.931: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:54:08.931: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:54:18.970: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:54:18.971: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:54:18.971: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:54:18.971: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:54:30.592: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:54:30.592: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:54:30.592: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:54:38.963: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:54:38.963: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:54:38.963: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:54:49.044: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:54:49.044: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:54:58.998: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
Jan  2 17:54:58.998: INFO: Waiting for Pod e2e-tests-statefulset-w8jcd/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  2 17:55:08.971: INFO: Waiting for StatefulSet e2e-tests-statefulset-w8jcd/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 17:55:18.985: INFO: Deleting all statefulset in ns e2e-tests-statefulset-w8jcd
Jan  2 17:55:19.002: INFO: Scaling statefulset ss2 to 0
Jan  2 17:55:59.058: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 17:55:59.065: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:55:59.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-w8jcd" for this suite.
Jan  2 17:56:07.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:56:07.290: INFO: namespace: e2e-tests-statefulset-w8jcd, resource: bindings, ignored listing per whitelist
Jan  2 17:56:07.329: INFO: namespace e2e-tests-statefulset-w8jcd deletion completed in 8.201761063s

• [SLOW TEST:253.139 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:56:07.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  2 17:56:20.725: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:56:21.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-ms8xf" for this suite.
Jan  2 17:56:46.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:56:46.438: INFO: namespace: e2e-tests-replicaset-ms8xf, resource: bindings, ignored listing per whitelist
Jan  2 17:56:46.505: INFO: namespace e2e-tests-replicaset-ms8xf deletion completed in 24.673541438s

• [SLOW TEST:39.176 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:56:46.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-x5vpl
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-x5vpl
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-x5vpl
Jan  2 17:56:47.289: INFO: Found 0 stateful pods, waiting for 1
Jan  2 17:56:57.311: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 17:57:07.308: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  2 17:57:07.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x5vpl ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 17:57:08.007: INFO: stderr: ""
Jan  2 17:57:08.007: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 17:57:08.008: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 17:57:08.028: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  2 17:57:18.074: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 17:57:18.074: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 17:57:18.141: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998809s
Jan  2 17:57:19.153: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.983418189s
Jan  2 17:57:20.197: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.970551651s
Jan  2 17:57:21.272: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.926664529s
Jan  2 17:57:22.290: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.851730539s
Jan  2 17:57:23.315: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.834031319s
Jan  2 17:57:24.336: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.808614684s
Jan  2 17:57:25.357: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.78771901s
Jan  2 17:57:26.521: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.766633614s
Jan  2 17:57:27.538: INFO: Verifying statefulset ss doesn't scale past 1 for another 602.963402ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-x5vpl
Jan  2 17:57:28.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x5vpl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 17:57:29.434: INFO: stderr: ""
Jan  2 17:57:29.434: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 17:57:29.434: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 17:57:29.483: INFO: Found 1 stateful pods, waiting for 3
Jan  2 17:57:39.499: INFO: Found 2 stateful pods, waiting for 3
Jan  2 17:57:49.506: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:57:49.506: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:57:49.506: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  2 17:57:59.502: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:57:59.502: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 17:57:59.502: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  2 17:57:59.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x5vpl ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 17:58:00.264: INFO: stderr: ""
Jan  2 17:58:00.264: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 17:58:00.264: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 17:58:00.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x5vpl ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 17:58:00.926: INFO: stderr: ""
Jan  2 17:58:00.926: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 17:58:00.926: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 17:58:00.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x5vpl ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 17:58:01.312: INFO: stderr: ""
Jan  2 17:58:01.312: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 17:58:01.312: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 17:58:01.312: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 17:58:01.322: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan  2 17:58:11.345: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 17:58:11.345: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 17:58:11.345: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 17:58:11.378: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999599s
Jan  2 17:58:12.400: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982958413s
Jan  2 17:58:13.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.96110301s
Jan  2 17:58:14.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.94298876s
Jan  2 17:58:15.478: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.92739131s
Jan  2 17:58:16.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.882998345s
Jan  2 17:58:17.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.718118637s
Jan  2 17:58:18.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.704223845s
Jan  2 17:58:19.887: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.495320139s
Jan  2 17:58:20.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 474.109036ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-x5vpl
Jan  2 17:58:21.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x5vpl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 17:58:23.179: INFO: stderr: ""
Jan  2 17:58:23.180: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 17:58:23.180: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 17:58:23.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x5vpl ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 17:58:23.530: INFO: stderr: ""
Jan  2 17:58:23.530: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 17:58:23.530: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 17:58:23.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-x5vpl ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 17:58:24.196: INFO: stderr: ""
Jan  2 17:58:24.196: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 17:58:24.196: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 17:58:24.196: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 17:58:54.313: INFO: Deleting all statefulset in ns e2e-tests-statefulset-x5vpl
Jan  2 17:58:54.358: INFO: Scaling statefulset ss to 0
Jan  2 17:58:54.529: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 17:58:54.537: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:58:54.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-x5vpl" for this suite.
Jan  2 17:59:00.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:59:00.846: INFO: namespace: e2e-tests-statefulset-x5vpl, resource: bindings, ignored listing per whitelist
Jan  2 17:59:00.909: INFO: namespace e2e-tests-statefulset-x5vpl deletion completed in 6.319643754s

• [SLOW TEST:134.404 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:59:00.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  2 17:59:11.928: INFO: Successfully updated pod "labelsupdate8e7bd4bd-2d89-11ea-b611-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:59:16.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pk9jp" for this suite.
Jan  2 17:59:38.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:59:38.363: INFO: namespace: e2e-tests-projected-pk9jp, resource: bindings, ignored listing per whitelist
Jan  2 17:59:38.380: INFO: namespace e2e-tests-projected-pk9jp deletion completed in 22.276655704s

• [SLOW TEST:37.471 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:59:38.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  2 17:59:38.610: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  2 17:59:38.627: INFO: Waiting for terminating namespaces to be deleted...
Jan  2 17:59:38.631: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  2 17:59:38.645: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 17:59:38.645: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 17:59:38.645: INFO: 	Container coredns ready: true, restart count 0
Jan  2 17:59:38.645: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  2 17:59:38.645: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  2 17:59:38.645: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 17:59:38.645: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  2 17:59:38.645: INFO: 	Container weave ready: true, restart count 0
Jan  2 17:59:38.645: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 17:59:38.645: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 17:59:38.645: INFO: 	Container coredns ready: true, restart count 0
Jan  2 17:59:38.645: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 17:59:38.645: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan  2 17:59:38.782: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  2 17:59:38.782: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  2 17:59:38.782: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan  2 17:59:38.782: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan  2 17:59:38.782: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan  2 17:59:38.782: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan  2 17:59:38.782: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  2 17:59:38.782: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a4db3d9c-2d89-11ea-b611-0242ac110005.15e623b4ca268bf3], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-mffdc/filler-pod-a4db3d9c-2d89-11ea-b611-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a4db3d9c-2d89-11ea-b611-0242ac110005.15e623b61e201628], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a4db3d9c-2d89-11ea-b611-0242ac110005.15e623b6dd97b239], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a4db3d9c-2d89-11ea-b611-0242ac110005.15e623b70908f373], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e623b722d0441d], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 17:59:50.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-mffdc" for this suite.
Jan  2 17:59:58.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 17:59:58.682: INFO: namespace: e2e-tests-sched-pred-mffdc, resource: bindings, ignored listing per whitelist
Jan  2 17:59:58.929: INFO: namespace e2e-tests-sched-pred-mffdc deletion completed in 8.764905001s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.549 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 17:59:58.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:00:09.634: INFO: Waiting up to 5m0s for pod "client-envvars-b73a09da-2d89-11ea-b611-0242ac110005" in namespace "e2e-tests-pods-chz4n" to be "success or failure"
Jan  2 18:00:09.771: INFO: Pod "client-envvars-b73a09da-2d89-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 136.482524ms
Jan  2 18:00:11.912: INFO: Pod "client-envvars-b73a09da-2d89-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27761746s
Jan  2 18:00:13.937: INFO: Pod "client-envvars-b73a09da-2d89-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302674977s
Jan  2 18:00:15.964: INFO: Pod "client-envvars-b73a09da-2d89-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.329271462s
Jan  2 18:00:18.266: INFO: Pod "client-envvars-b73a09da-2d89-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.631762152s
Jan  2 18:00:20.289: INFO: Pod "client-envvars-b73a09da-2d89-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.654854444s
Jan  2 18:00:22.638: INFO: Pod "client-envvars-b73a09da-2d89-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.003877547s
STEP: Saw pod success
Jan  2 18:00:22.638: INFO: Pod "client-envvars-b73a09da-2d89-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:00:22.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-b73a09da-2d89-11ea-b611-0242ac110005 container env3cont: 
STEP: delete the pod
Jan  2 18:00:22.911: INFO: Waiting for pod client-envvars-b73a09da-2d89-11ea-b611-0242ac110005 to disappear
Jan  2 18:00:22.967: INFO: Pod client-envvars-b73a09da-2d89-11ea-b611-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:00:22.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-chz4n" for this suite.
Jan  2 18:01:17.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:01:17.108: INFO: namespace: e2e-tests-pods-chz4n, resource: bindings, ignored listing per whitelist
Jan  2 18:01:17.203: INFO: namespace e2e-tests-pods-chz4n deletion completed in 54.221147161s

• [SLOW TEST:78.273 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:01:17.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan  2 18:01:17.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  2 18:01:17.510: INFO: stderr: ""
Jan  2 18:01:17.510: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:01:17.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6f229" for this suite.
Jan  2 18:01:23.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:01:23.662: INFO: namespace: e2e-tests-kubectl-6f229, resource: bindings, ignored listing per whitelist
Jan  2 18:01:23.846: INFO: namespace e2e-tests-kubectl-6f229 deletion completed in 6.275817443s

• [SLOW TEST:6.642 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:01:23.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  2 18:01:24.218: INFO: Number of nodes with available pods: 0
Jan  2 18:01:24.218: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:01:25.236: INFO: Number of nodes with available pods: 0
Jan  2 18:01:25.236: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:01:26.789: INFO: Number of nodes with available pods: 0
Jan  2 18:01:26.789: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:01:27.933: INFO: Number of nodes with available pods: 0
Jan  2 18:01:27.934: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:01:28.273: INFO: Number of nodes with available pods: 0
Jan  2 18:01:28.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:01:29.253: INFO: Number of nodes with available pods: 0
Jan  2 18:01:29.253: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:01:30.255: INFO: Number of nodes with available pods: 0
Jan  2 18:01:30.255: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:01:31.413: INFO: Number of nodes with available pods: 0
Jan  2 18:01:31.413: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:01:32.284: INFO: Number of nodes with available pods: 0
Jan  2 18:01:32.284: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:01:33.244: INFO: Number of nodes with available pods: 0
Jan  2 18:01:33.244: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:01:34.242: INFO: Number of nodes with available pods: 0
Jan  2 18:01:34.242: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:01:35.249: INFO: Number of nodes with available pods: 1
Jan  2 18:01:35.249: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  2 18:01:35.372: INFO: Number of nodes with available pods: 1
Jan  2 18:01:35.372: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nck8v, will wait for the garbage collector to delete the pods
Jan  2 18:01:36.496: INFO: Deleting DaemonSet.extensions daemon-set took: 30.043904ms
Jan  2 18:01:37.397: INFO: Terminating DaemonSet.extensions daemon-set pods took: 900.958869ms
Jan  2 18:01:39.205: INFO: Number of nodes with available pods: 0
Jan  2 18:01:39.205: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 18:01:39.209: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nck8v/daemonsets","resourceVersion":"16941447"},"items":null}

Jan  2 18:01:39.211: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nck8v/pods","resourceVersion":"16941447"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:01:39.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-nck8v" for this suite.
Jan  2 18:01:45.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:01:45.492: INFO: namespace: e2e-tests-daemonsets-nck8v, resource: bindings, ignored listing per whitelist
Jan  2 18:01:45.620: INFO: namespace e2e-tests-daemonsets-nck8v deletion completed in 6.397075196s

• [SLOW TEST:21.773 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:01:45.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-f08db3f6-2d89-11ea-b611-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-f08db3a3-2d89-11ea-b611-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  2 18:01:45.874: INFO: Waiting up to 5m0s for pod "projected-volume-f08db2f2-2d89-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-dnw74" to be "success or failure"
Jan  2 18:01:45.881: INFO: Pod "projected-volume-f08db2f2-2d89-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.528057ms
Jan  2 18:01:48.333: INFO: Pod "projected-volume-f08db2f2-2d89-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.458387202s
Jan  2 18:01:50.361: INFO: Pod "projected-volume-f08db2f2-2d89-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.487276489s
Jan  2 18:01:53.533: INFO: Pod "projected-volume-f08db2f2-2d89-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.658774021s
Jan  2 18:01:55.557: INFO: Pod "projected-volume-f08db2f2-2d89-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.682605811s
Jan  2 18:01:57.568: INFO: Pod "projected-volume-f08db2f2-2d89-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.693775448s
STEP: Saw pod success
Jan  2 18:01:57.568: INFO: Pod "projected-volume-f08db2f2-2d89-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:01:57.573: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-f08db2f2-2d89-11ea-b611-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan  2 18:01:58.861: INFO: Waiting for pod projected-volume-f08db2f2-2d89-11ea-b611-0242ac110005 to disappear
Jan  2 18:01:58.881: INFO: Pod projected-volume-f08db2f2-2d89-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:01:58.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dnw74" for this suite.
Jan  2 18:02:05.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:02:05.217: INFO: namespace: e2e-tests-projected-dnw74, resource: bindings, ignored listing per whitelist
Jan  2 18:02:05.241: INFO: namespace e2e-tests-projected-dnw74 deletion completed in 6.352681956s

• [SLOW TEST:19.621 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:02:05.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  2 18:02:18.249: INFO: Successfully updated pod "labelsupdatefc48d8c4-2d89-11ea-b611-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:02:20.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-69dmh" for this suite.
Jan  2 18:02:42.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:02:42.793: INFO: namespace: e2e-tests-downward-api-69dmh, resource: bindings, ignored listing per whitelist
Jan  2 18:02:42.874: INFO: namespace e2e-tests-downward-api-69dmh deletion completed in 22.475163834s

• [SLOW TEST:37.633 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:02:42.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  2 18:02:43.158: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-84p2q,SelfLink:/api/v1/namespaces/e2e-tests-watch-84p2q/configmaps/e2e-watch-test-label-changed,UID:12baa865-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16941592,Generation:0,CreationTimestamp:2020-01-02 18:02:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 18:02:43.158: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-84p2q,SelfLink:/api/v1/namespaces/e2e-tests-watch-84p2q/configmaps/e2e-watch-test-label-changed,UID:12baa865-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16941593,Generation:0,CreationTimestamp:2020-01-02 18:02:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  2 18:02:43.159: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-84p2q,SelfLink:/api/v1/namespaces/e2e-tests-watch-84p2q/configmaps/e2e-watch-test-label-changed,UID:12baa865-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16941594,Generation:0,CreationTimestamp:2020-01-02 18:02:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  2 18:02:53.405: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-84p2q,SelfLink:/api/v1/namespaces/e2e-tests-watch-84p2q/configmaps/e2e-watch-test-label-changed,UID:12baa865-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16941608,Generation:0,CreationTimestamp:2020-01-02 18:02:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 18:02:53.405: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-84p2q,SelfLink:/api/v1/namespaces/e2e-tests-watch-84p2q/configmaps/e2e-watch-test-label-changed,UID:12baa865-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16941609,Generation:0,CreationTimestamp:2020-01-02 18:02:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  2 18:02:53.406: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-84p2q,SelfLink:/api/v1/namespaces/e2e-tests-watch-84p2q/configmaps/e2e-watch-test-label-changed,UID:12baa865-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16941610,Generation:0,CreationTimestamp:2020-01-02 18:02:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:02:53.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-84p2q" for this suite.
Jan  2 18:02:59.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:02:59.549: INFO: namespace: e2e-tests-watch-84p2q, resource: bindings, ignored listing per whitelist
Jan  2 18:02:59.656: INFO: namespace e2e-tests-watch-84p2q deletion completed in 6.229057001s

• [SLOW TEST:16.778 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:02:59.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan  2 18:02:59.887: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-kzjv4" to be "success or failure"
Jan  2 18:02:59.912: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 25.063005ms
Jan  2 18:03:02.284: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.397318611s
Jan  2 18:03:04.302: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41537902s
Jan  2 18:03:06.661: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.774038233s
Jan  2 18:03:08.682: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.795209497s
Jan  2 18:03:10.741: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.853592189s
Jan  2 18:03:12.961: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.07368773s
Jan  2 18:03:14.978: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.090708489s
STEP: Saw pod success
Jan  2 18:03:14.978: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  2 18:03:14.982: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  2 18:03:15.234: INFO: Waiting for pod pod-host-path-test to disappear
Jan  2 18:03:15.262: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:03:15.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-kzjv4" for this suite.
Jan  2 18:03:21.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:03:21.494: INFO: namespace: e2e-tests-hostpath-kzjv4, resource: bindings, ignored listing per whitelist
Jan  2 18:03:21.577: INFO: namespace e2e-tests-hostpath-kzjv4 deletion completed in 6.271065492s

• [SLOW TEST:21.921 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:03:21.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  2 18:03:21.932: INFO: Waiting up to 5m0s for pod "pod-29c1fd1f-2d8a-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-cspd9" to be "success or failure"
Jan  2 18:03:21.950: INFO: Pod "pod-29c1fd1f-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.667809ms
Jan  2 18:03:24.594: INFO: Pod "pod-29c1fd1f-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662369766s
Jan  2 18:03:26.655: INFO: Pod "pod-29c1fd1f-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722695527s
Jan  2 18:03:28.760: INFO: Pod "pod-29c1fd1f-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.828435949s
Jan  2 18:03:30.785: INFO: Pod "pod-29c1fd1f-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.852658044s
Jan  2 18:03:32.805: INFO: Pod "pod-29c1fd1f-2d8a-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.872651049s
STEP: Saw pod success
Jan  2 18:03:32.805: INFO: Pod "pod-29c1fd1f-2d8a-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:03:32.818: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-29c1fd1f-2d8a-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 18:03:33.050: INFO: Waiting for pod pod-29c1fd1f-2d8a-11ea-b611-0242ac110005 to disappear
Jan  2 18:03:33.217: INFO: Pod pod-29c1fd1f-2d8a-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:03:33.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cspd9" for this suite.
Jan  2 18:03:39.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:03:39.476: INFO: namespace: e2e-tests-emptydir-cspd9, resource: bindings, ignored listing per whitelist
Jan  2 18:03:39.485: INFO: namespace e2e-tests-emptydir-cspd9 deletion completed in 6.251073098s

• [SLOW TEST:17.907 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:03:39.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:03:39.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:03:51.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4l52z" for this suite.
Jan  2 18:04:47.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:04:48.024: INFO: namespace: e2e-tests-pods-4l52z, resource: bindings, ignored listing per whitelist
Jan  2 18:04:48.064: INFO: namespace e2e-tests-pods-4l52z deletion completed in 56.237475517s

• [SLOW TEST:68.579 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:04:48.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-5d692396-2d8a-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 18:04:48.431: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5d6a3315-2d8a-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-j52rw" to be "success or failure"
Jan  2 18:04:48.441: INFO: Pod "pod-projected-secrets-5d6a3315-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.640589ms
Jan  2 18:04:50.463: INFO: Pod "pod-projected-secrets-5d6a3315-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032243134s
Jan  2 18:04:52.484: INFO: Pod "pod-projected-secrets-5d6a3315-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052295265s
Jan  2 18:04:54.702: INFO: Pod "pod-projected-secrets-5d6a3315-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.270779148s
Jan  2 18:04:56.716: INFO: Pod "pod-projected-secrets-5d6a3315-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285249368s
Jan  2 18:04:58.744: INFO: Pod "pod-projected-secrets-5d6a3315-2d8a-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.312393132s
STEP: Saw pod success
Jan  2 18:04:58.744: INFO: Pod "pod-projected-secrets-5d6a3315-2d8a-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:04:58.756: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5d6a3315-2d8a-11ea-b611-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 18:04:58.954: INFO: Waiting for pod pod-projected-secrets-5d6a3315-2d8a-11ea-b611-0242ac110005 to disappear
Jan  2 18:04:58.982: INFO: Pod pod-projected-secrets-5d6a3315-2d8a-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:04:58.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j52rw" for this suite.
Jan  2 18:05:05.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:05:05.232: INFO: namespace: e2e-tests-projected-j52rw, resource: bindings, ignored listing per whitelist
Jan  2 18:05:05.291: INFO: namespace e2e-tests-projected-j52rw deletion completed in 6.283251416s

• [SLOW TEST:17.226 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:05:05.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 18:05:05.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-fnt6d'
Jan  2 18:05:08.022: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 18:05:08.022: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan  2 18:05:10.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-fnt6d'
Jan  2 18:05:10.504: INFO: stderr: ""
Jan  2 18:05:10.504: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:05:10.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fnt6d" for this suite.
Jan  2 18:05:16.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:05:16.790: INFO: namespace: e2e-tests-kubectl-fnt6d, resource: bindings, ignored listing per whitelist
Jan  2 18:05:16.881: INFO: namespace e2e-tests-kubectl-fnt6d deletion completed in 6.358507409s

• [SLOW TEST:11.590 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:05:16.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-6e81c22e-2d8a-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 18:05:17.196: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-2wrxs" to be "success or failure"
Jan  2 18:05:17.220: INFO: Pod "pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.08919ms
Jan  2 18:05:19.234: INFO: Pod "pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037318381s
Jan  2 18:05:21.257: INFO: Pod "pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061289234s
Jan  2 18:05:23.710: INFO: Pod "pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.513609884s
Jan  2 18:05:25.728: INFO: Pod "pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531662711s
Jan  2 18:05:27.754: INFO: Pod "pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.557803113s
Jan  2 18:05:29.770: INFO: Pod "pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.573785791s
STEP: Saw pod success
Jan  2 18:05:29.770: INFO: Pod "pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:05:29.775: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 18:05:30.053: INFO: Waiting for pod pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005 to disappear
Jan  2 18:05:30.062: INFO: Pod pod-projected-secrets-6e834999-2d8a-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:05:30.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2wrxs" for this suite.
Jan  2 18:05:36.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:05:36.300: INFO: namespace: e2e-tests-projected-2wrxs, resource: bindings, ignored listing per whitelist
Jan  2 18:05:36.373: INFO: namespace e2e-tests-projected-2wrxs deletion completed in 6.302400898s

• [SLOW TEST:19.491 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:05:36.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:05:46.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-9zcq6" for this suite.
Jan  2 18:05:53.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:05:53.174: INFO: namespace: e2e-tests-emptydir-wrapper-9zcq6, resource: bindings, ignored listing per whitelist
Jan  2 18:05:53.180: INFO: namespace e2e-tests-emptydir-wrapper-9zcq6 deletion completed in 6.224857056s

• [SLOW TEST:16.807 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:05:53.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan  2 18:05:53.370: INFO: Waiting up to 5m0s for pod "var-expansion-841e79b6-2d8a-11ea-b611-0242ac110005" in namespace "e2e-tests-var-expansion-p8t54" to be "success or failure"
Jan  2 18:05:53.455: INFO: Pod "var-expansion-841e79b6-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.406423ms
Jan  2 18:05:55.470: INFO: Pod "var-expansion-841e79b6-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100138907s
Jan  2 18:05:57.502: INFO: Pod "var-expansion-841e79b6-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131845475s
Jan  2 18:05:59.624: INFO: Pod "var-expansion-841e79b6-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253818725s
Jan  2 18:06:01.635: INFO: Pod "var-expansion-841e79b6-2d8a-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265289163s
Jan  2 18:06:03.648: INFO: Pod "var-expansion-841e79b6-2d8a-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.27779812s
STEP: Saw pod success
Jan  2 18:06:03.648: INFO: Pod "var-expansion-841e79b6-2d8a-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:06:03.654: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-841e79b6-2d8a-11ea-b611-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 18:06:04.522: INFO: Waiting for pod var-expansion-841e79b6-2d8a-11ea-b611-0242ac110005 to disappear
Jan  2 18:06:04.867: INFO: Pod var-expansion-841e79b6-2d8a-11ea-b611-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:06:04.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-p8t54" for this suite.
Jan  2 18:06:10.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:06:11.102: INFO: namespace: e2e-tests-var-expansion-p8t54, resource: bindings, ignored listing per whitelist
Jan  2 18:06:11.133: INFO: namespace e2e-tests-var-expansion-p8t54 deletion completed in 6.251861839s

• [SLOW TEST:17.953 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:06:11.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:06:11.396: INFO: Creating deployment "nginx-deployment"
Jan  2 18:06:11.413: INFO: Waiting for observed generation 1
Jan  2 18:06:14.690: INFO: Waiting for all required pods to come up
Jan  2 18:06:15.220: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  2 18:06:57.286: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  2 18:06:57.298: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  2 18:06:57.315: INFO: Updating deployment nginx-deployment
Jan  2 18:06:57.315: INFO: Waiting for observed generation 2
Jan  2 18:07:00.851: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  2 18:07:00.884: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  2 18:07:01.431: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  2 18:07:02.089: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  2 18:07:02.089: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  2 18:07:02.115: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  2 18:07:03.365: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  2 18:07:03.365: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  2 18:07:04.687: INFO: Updating deployment nginx-deployment
Jan  2 18:07:04.687: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  2 18:07:07.060: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  2 18:07:07.771: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 18:07:10.256: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8sf5f/deployments/nginx-deployment,UID:8edf79e6-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942291,Generation:3,CreationTimestamp:2020-01-02 18:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-02 18:06:58 +0000 UTC 2020-01-02 18:06:11 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-02 18:07:05 +0000 UTC 2020-01-02 18:07:05 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  2 18:07:10.540: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8sf5f/replicasets/nginx-deployment-5c98f8fb5,UID:aa3fc251-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942339,Generation:3,CreationTimestamp:2020-01-02 18:06:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 8edf79e6-2d8a-11ea-a994-fa163e34d433 0xc00298c987 0xc00298c988}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 18:07:10.540: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  2 18:07:10.540: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8sf5f/replicasets/nginx-deployment-85ddf47c5d,UID:8eeaf995-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942337,Generation:3,CreationTimestamp:2020-01-02 18:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 8edf79e6-2d8a-11ea-a994-fa163e34d433 0xc00298ca47 0xc00298ca48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  2 18:07:12.995: INFO: Pod "nginx-deployment-5c98f8fb5-9bkc8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9bkc8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-9bkc8,UID:afae20ed-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942314,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc00298d3a7 0xc00298d3a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00298d410} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00298d430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:12.999: INFO: Pod "nginx-deployment-5c98f8fb5-gfg5j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gfg5j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-gfg5j,UID:aaf5ec1a-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942275,Generation:0,CreationTimestamp:2020-01-02 18:06:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc00298d4a7 0xc00298d4a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00298d510} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00298d530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 18:07:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:12.999: INFO: Pod "nginx-deployment-5c98f8fb5-gtwkv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gtwkv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-gtwkv,UID:aa61f997-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942271,Generation:0,CreationTimestamp:2020-01-02 18:06:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc00298d5f7 0xc00298d5f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00298d660} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00298d680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 18:06:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.000: INFO: Pod "nginx-deployment-5c98f8fb5-j9nj7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-j9nj7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-j9nj7,UID:afe48ab6-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942328,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc00298d747 0xc00298d748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00298d7b0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00298d7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.000: INFO: Pod "nginx-deployment-5c98f8fb5-jc2rz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jc2rz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-jc2rz,UID:aa6240c4-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942274,Generation:0,CreationTimestamp:2020-01-02 18:06:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc00298d847 0xc00298d848}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00298d8b0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00298d8d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 18:06:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.001: INFO: Pod "nginx-deployment-5c98f8fb5-kctnb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kctnb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-kctnb,UID:ab04428c-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942276,Generation:0,CreationTimestamp:2020-01-02 18:06:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc00298d997 0xc00298d998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00298da00} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00298da20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 18:07:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.002: INFO: Pod "nginx-deployment-5c98f8fb5-lnngx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lnngx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-lnngx,UID:afe4705d-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942324,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc00298dae7 0xc00298dae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00298db50} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00298db70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.003: INFO: Pod "nginx-deployment-5c98f8fb5-m4qbp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-m4qbp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-m4qbp,UID:afe47898-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942325,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc00298dbe7 0xc00298dbe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00298dc50} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00298dc70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.004: INFO: Pod "nginx-deployment-5c98f8fb5-mgtzj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mgtzj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-mgtzj,UID:afae2972-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942309,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc0026b77e7 0xc0026b77e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026b7860} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026b7880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.004: INFO: Pod "nginx-deployment-5c98f8fb5-rmgsk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rmgsk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-rmgsk,UID:afe4a792-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942327,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc0026b78f7 0xc0026b78f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026b7960} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026b7980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.004: INFO: Pod "nginx-deployment-5c98f8fb5-szjkc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-szjkc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-szjkc,UID:b01e7241-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942340,Generation:0,CreationTimestamp:2020-01-02 18:07:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc0026b79f7 0xc0026b79f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026b7a60} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026b7a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.004: INFO: Pod "nginx-deployment-5c98f8fb5-wnkkm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wnkkm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-wnkkm,UID:aa5c5d86-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942259,Generation:0,CreationTimestamp:2020-01-02 18:06:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc0026b7af7 0xc0026b7af8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026b7b60} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026b7b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 18:06:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.004: INFO: Pod "nginx-deployment-5c98f8fb5-xgr7t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xgr7t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-5c98f8fb5-xgr7t,UID:aeec614c-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942350,Generation:0,CreationTimestamp:2020-01-02 18:07:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aa3fc251-2d8a-11ea-a994-fa163e34d433 0xc0026b7c47 0xc0026b7c48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026b7cb0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0026b7cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 18:07:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.005: INFO: Pod "nginx-deployment-85ddf47c5d-49gh9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-49gh9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-49gh9,UID:afe6ff34-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942330,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc0026b7d97 0xc0026b7d98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0026b7e00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026b7e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.005: INFO: Pod "nginx-deployment-85ddf47c5d-4zhpm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4zhpm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-4zhpm,UID:8efbb93d-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942191,Generation:0,CreationTimestamp:2020-01-02 18:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc0026b7e97 0xc0026b7e98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0026b7f00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026b7f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-02 18:06:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 18:06:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8c36dfa22998813ab270225dea9d3cec117b2a9141864726f7b68c331146b23d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.005: INFO: Pod "nginx-deployment-85ddf47c5d-68dq6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-68dq6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-68dq6,UID:8f0e69fb-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942204,Generation:0,CreationTimestamp:2020-01-02 18:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc0026b7fe7 0xc0026b7fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa0050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa0070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-02 18:06:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 18:06:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8fb6d0281967422ac1f2fa9356f2d2ca13221d19166da6294f9407cac338931f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.005: INFO: Pod "nginx-deployment-85ddf47c5d-6dntw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6dntw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-6dntw,UID:afadcf0e-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942304,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa0137 0xc002aa0138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa01a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa01c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.006: INFO: Pod "nginx-deployment-85ddf47c5d-75tck" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-75tck,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-75tck,UID:8ef12281-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942168,Generation:0,CreationTimestamp:2020-01-02 18:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa0237 0xc002aa0238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa02a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa02c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-02 18:06:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 18:06:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6baf9ed130c947ad34765c276c3abe13c24df05a7988673addbe62804c61a400}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.006: INFO: Pod "nginx-deployment-85ddf47c5d-7bvws" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7bvws,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-7bvws,UID:aeef624d-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942356,Generation:0,CreationTimestamp:2020-01-02 18:07:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa0387 0xc002aa0388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa03f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa0410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 18:07:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.006: INFO: Pod "nginx-deployment-85ddf47c5d-9q76d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9q76d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-9q76d,UID:afe6d152-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942334,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa04c7 0xc002aa04c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa0530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa0550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.006: INFO: Pod "nginx-deployment-85ddf47c5d-cpt7l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cpt7l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-cpt7l,UID:8efb9890-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942196,Generation:0,CreationTimestamp:2020-01-02 18:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa05c7 0xc002aa05c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa0630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa0650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-02 18:06:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 18:06:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d9c9fa9579e4665d1c5ed8afc9fe2a6d32385719d136f1af52a9146c6688c629}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.007: INFO: Pod "nginx-deployment-85ddf47c5d-dz9c5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dz9c5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-dz9c5,UID:8f0e79dd-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942201,Generation:0,CreationTimestamp:2020-01-02 18:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa0717 0xc002aa0718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa0780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa07a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-02 18:06:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 18:06:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a56373f13d1fbec33eabcfa803c18d1e574558961ebbbe68bd688bc0844f7a2d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.007: INFO: Pod "nginx-deployment-85ddf47c5d-gbnp7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gbnp7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-gbnp7,UID:aef0640b-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942301,Generation:0,CreationTimestamp:2020-01-02 18:07:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa0867 0xc002aa0868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa08d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa08f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.007: INFO: Pod "nginx-deployment-85ddf47c5d-gbqth" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gbqth,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-gbqth,UID:afe76121-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942335,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa0967 0xc002aa0968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa09d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa09f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.007: INFO: Pod "nginx-deployment-85ddf47c5d-kjf6w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kjf6w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-kjf6w,UID:afadd503-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942310,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa0a67 0xc002aa0a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa0ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa0af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.007: INFO: Pod "nginx-deployment-85ddf47c5d-nxttr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nxttr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-nxttr,UID:afe64615-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942332,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa0b67 0xc002aa0b68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa0bd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa0bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.008: INFO: Pod "nginx-deployment-85ddf47c5d-qmzws" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qmzws,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-qmzws,UID:8ef9f4c7-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942221,Generation:0,CreationTimestamp:2020-01-02 18:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa0c67 0xc002aa0c68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa0cd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa0cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-02 18:06:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 18:06:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1065b1a280b445ecfe3901b61abc249ae945b100c0855384cc40709f6f908fec}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.008: INFO: Pod "nginx-deployment-85ddf47c5d-t2sqj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t2sqj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-t2sqj,UID:8efaac77-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942193,Generation:0,CreationTimestamp:2020-01-02 18:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa0db7 0xc002aa0db8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa0e20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa0e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-02 18:06:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 18:06:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ee89fee3f8415cccc10959faac880be01a074edf7e19ff445fd98cfae1a80d69}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.008: INFO: Pod "nginx-deployment-85ddf47c5d-vwfdc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vwfdc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-vwfdc,UID:8f0e823c-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942217,Generation:0,CreationTimestamp:2020-01-02 18:06:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa0f07 0xc002aa0f08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa0f70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa0f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:06:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-02 18:06:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 18:06:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f9b8a764e89e8ad7c817c026d05ebf44ee5040584c48f6535fe7d1de5507deff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.008: INFO: Pod "nginx-deployment-85ddf47c5d-w8sjh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w8sjh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-w8sjh,UID:afad33b4-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942308,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa1057 0xc002aa1058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa10c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa10e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.008: INFO: Pod "nginx-deployment-85ddf47c5d-wpwqf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wpwqf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-wpwqf,UID:afadf06d-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942313,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa1157 0xc002aa1158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa11c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa11e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.009: INFO: Pod "nginx-deployment-85ddf47c5d-x6gxm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x6gxm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-x6gxm,UID:aeaa0ff0-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942344,Generation:0,CreationTimestamp:2020-01-02 18:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa1257 0xc002aa1258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa12c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa12e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-02 18:07:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  2 18:07:13.009: INFO: Pod "nginx-deployment-85ddf47c5d-x8gwl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x8gwl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8sf5f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8sf5f/pods/nginx-deployment-85ddf47c5d-x8gwl,UID:afe69d35-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942331,Generation:0,CreationTimestamp:2020-01-02 18:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8eeaf995-2d8a-11ea-a994-fa163e34d433 0xc002aa1397 0xc002aa1398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hc5tn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hc5tn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hc5tn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002aa1400} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa1420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:07:07 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:07:13.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-8sf5f" for this suite.
Jan  2 18:08:21.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:08:21.878: INFO: namespace: e2e-tests-deployment-8sf5f, resource: bindings, ignored listing per whitelist
Jan  2 18:08:22.220: INFO: namespace e2e-tests-deployment-8sf5f deletion completed in 1m8.86858129s

• [SLOW TEST:131.086 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:08:22.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  2 18:08:57.974: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-dd2741ec-2d8a-11ea-b611-0242ac110005,GenerateName:,Namespace:e2e-tests-events-ldllz,SelfLink:/api/v1/namespaces/e2e-tests-events-ldllz/pods/send-events-dd2741ec-2d8a-11ea-b611-0242ac110005,UID:dd303937-2d8a-11ea-a994-fa163e34d433,ResourceVersion:16942669,Generation:0,CreationTimestamp:2020-01-02 18:08:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 729587967,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7lfnf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7lfnf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-7lfnf true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001edab90} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001edacb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:08:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:08:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:08:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:08:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-02 18:08:24 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-02 18:08:57 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://549430e9bd4d7c28c8955b4abf4cb99f83ec41ae30bc3a9785072b8c35ccfbf2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  2 18:08:59.992: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  2 18:09:02.017: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:09:02.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-ldllz" for this suite.
Jan  2 18:09:40.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:09:40.360: INFO: namespace: e2e-tests-events-ldllz, resource: bindings, ignored listing per whitelist
Jan  2 18:09:40.360: INFO: namespace e2e-tests-events-ldllz deletion completed in 38.297443508s

• [SLOW TEST:78.139 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
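The test above asserts that both a scheduler event and a kubelet event were recorded for the pod. The same check can be reproduced by hand with a field selector; the namespace and pod name below are illustrative, not the generated ones from the run:

```shell
# List events attached to a specific pod (names are illustrative)
kubectl get events --namespace my-namespace \
  --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-demo
# A "Scheduled" event is emitted by the default-scheduler, and "Pulled"/"Started"
# events by the kubelet -- the two sources this test waits for.
```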
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:09:40.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 18:09:40.702: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b9e6cec-2d8b-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-btgng" to be "success or failure"
Jan  2 18:09:40.719: INFO: Pod "downwardapi-volume-0b9e6cec-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.246026ms
Jan  2 18:09:42.795: INFO: Pod "downwardapi-volume-0b9e6cec-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092406816s
Jan  2 18:09:44.805: INFO: Pod "downwardapi-volume-0b9e6cec-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103111527s
Jan  2 18:09:47.036: INFO: Pod "downwardapi-volume-0b9e6cec-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.333289065s
Jan  2 18:09:49.069: INFO: Pod "downwardapi-volume-0b9e6cec-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.366585593s
Jan  2 18:09:51.087: INFO: Pod "downwardapi-volume-0b9e6cec-2d8b-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.38421471s
STEP: Saw pod success
Jan  2 18:09:51.087: INFO: Pod "downwardapi-volume-0b9e6cec-2d8b-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:09:51.101: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0b9e6cec-2d8b-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 18:09:52.562: INFO: Waiting for pod downwardapi-volume-0b9e6cec-2d8b-11ea-b611-0242ac110005 to disappear
Jan  2 18:09:52.597: INFO: Pod downwardapi-volume-0b9e6cec-2d8b-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:09:52.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-btgng" for this suite.
Jan  2 18:09:58.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:09:58.898: INFO: namespace: e2e-tests-downward-api-btgng, resource: bindings, ignored listing per whitelist
Jan  2 18:09:58.913: INFO: namespace e2e-tests-downward-api-btgng deletion completed in 6.253928559s

• [SLOW TEST:18.553 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
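The downward API volume test above mounts a file whose contents are the container's own memory request. A minimal manifest exercising the same mechanism (pod and file names are illustrative) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "mem_request"
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory   # the value the test reads back from the pod log
```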
SSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:09:58.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-169d967f-2d8b-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 18:09:59.344: INFO: Waiting up to 5m0s for pod "pod-secrets-16b9f7d4-2d8b-11ea-b611-0242ac110005" in namespace "e2e-tests-secrets-zrwr8" to be "success or failure"
Jan  2 18:09:59.349: INFO: Pod "pod-secrets-16b9f7d4-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.887144ms
Jan  2 18:10:01.829: INFO: Pod "pod-secrets-16b9f7d4-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.484169815s
Jan  2 18:10:03.842: INFO: Pod "pod-secrets-16b9f7d4-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.497447756s
Jan  2 18:10:06.608: INFO: Pod "pod-secrets-16b9f7d4-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.2630958s
Jan  2 18:10:08.628: INFO: Pod "pod-secrets-16b9f7d4-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.283861956s
Jan  2 18:10:10.655: INFO: Pod "pod-secrets-16b9f7d4-2d8b-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.3102658s
STEP: Saw pod success
Jan  2 18:10:10.655: INFO: Pod "pod-secrets-16b9f7d4-2d8b-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:10:10.662: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-16b9f7d4-2d8b-11ea-b611-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 18:10:10.726: INFO: Waiting for pod pod-secrets-16b9f7d4-2d8b-11ea-b611-0242ac110005 to disappear
Jan  2 18:10:10.735: INFO: Pod pod-secrets-16b9f7d4-2d8b-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:10:10.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zrwr8" for this suite.
Jan  2 18:10:16.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:10:16.967: INFO: namespace: e2e-tests-secrets-zrwr8, resource: bindings, ignored listing per whitelist
Jan  2 18:10:17.044: INFO: namespace e2e-tests-secrets-zrwr8 deletion completed in 6.302361636s
STEP: Destroying namespace "e2e-tests-secret-namespace-7t48c" for this suite.
Jan  2 18:10:23.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:10:23.126: INFO: namespace: e2e-tests-secret-namespace-7t48c, resource: bindings, ignored listing per whitelist
Jan  2 18:10:23.178: INFO: namespace e2e-tests-secret-namespace-7t48c deletion completed in 6.134178476s

• [SLOW TEST:24.265 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
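This test creates two secrets with the same name in different namespaces (hence the two namespaces destroyed above) and verifies the pod mounts the one from its own namespace. A sketch of the consuming pod, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # always resolved in the pod's own namespace
```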
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:10:23.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 18:10:23.453: INFO: Waiting up to 5m0s for pod "downward-api-251568ee-2d8b-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-rlz7q" to be "success or failure"
Jan  2 18:10:23.510: INFO: Pod "downward-api-251568ee-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 56.914992ms
Jan  2 18:10:25.669: INFO: Pod "downward-api-251568ee-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216338341s
Jan  2 18:10:27.680: INFO: Pod "downward-api-251568ee-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.226868957s
Jan  2 18:10:30.662: INFO: Pod "downward-api-251568ee-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.208738324s
Jan  2 18:10:32.679: INFO: Pod "downward-api-251568ee-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.226309601s
Jan  2 18:10:34.718: INFO: Pod "downward-api-251568ee-2d8b-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.264531808s
STEP: Saw pod success
Jan  2 18:10:34.718: INFO: Pod "downward-api-251568ee-2d8b-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:10:34.725: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-251568ee-2d8b-11ea-b611-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 18:10:35.326: INFO: Waiting for pod downward-api-251568ee-2d8b-11ea-b611-0242ac110005 to disappear
Jan  2 18:10:35.336: INFO: Pod downward-api-251568ee-2d8b-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:10:35.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rlz7q" for this suite.
Jan  2 18:10:41.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:10:41.623: INFO: namespace: e2e-tests-downward-api-rlz7q, resource: bindings, ignored listing per whitelist
Jan  2 18:10:41.645: INFO: namespace e2e-tests-downward-api-rlz7q deletion completed in 6.302331993s

• [SLOW TEST:18.467 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
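The env-var variant of the downward API shown above injects the pod's name, namespace, and IP through `fieldRef`. A minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```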
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:10:41.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-301ad474-2d8b-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 18:10:41.927: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-301d26e1-2d8b-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-sm725" to be "success or failure"
Jan  2 18:10:41.937: INFO: Pod "pod-projected-configmaps-301d26e1-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.773926ms
Jan  2 18:10:44.218: INFO: Pod "pod-projected-configmaps-301d26e1-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290970399s
Jan  2 18:10:46.232: INFO: Pod "pod-projected-configmaps-301d26e1-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304296548s
Jan  2 18:10:48.341: INFO: Pod "pod-projected-configmaps-301d26e1-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41417647s
Jan  2 18:10:50.356: INFO: Pod "pod-projected-configmaps-301d26e1-2d8b-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4289867s
Jan  2 18:10:52.371: INFO: Pod "pod-projected-configmaps-301d26e1-2d8b-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.444099921s
STEP: Saw pod success
Jan  2 18:10:52.371: INFO: Pod "pod-projected-configmaps-301d26e1-2d8b-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:10:52.376: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-301d26e1-2d8b-11ea-b611-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 18:10:53.611: INFO: Waiting for pod pod-projected-configmaps-301d26e1-2d8b-11ea-b611-0242ac110005 to disappear
Jan  2 18:10:53.644: INFO: Pod pod-projected-configmaps-301d26e1-2d8b-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:10:53.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sm725" for this suite.
Jan  2 18:10:59.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:10:59.908: INFO: namespace: e2e-tests-projected-sm725, resource: bindings, ignored listing per whitelist
Jan  2 18:10:59.914: INFO: namespace e2e-tests-projected-sm725 deletion completed in 6.253524013s

• [SLOW TEST:18.268 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
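A projected volume lets several sources (configMaps, secrets, downward API) share one mount point; the test above projects a single configMap. An illustrative sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/*"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test   # illustrative configMap name
```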
SSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:10:59.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-8pbs2
Jan  2 18:11:12.168: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-8pbs2
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 18:11:12.175: INFO: Initial restart count of pod liveness-http is 0
Jan  2 18:11:32.459: INFO: Restart count of pod e2e-tests-container-probe-8pbs2/liveness-http is now 1 (20.283547613s elapsed)
Jan  2 18:11:53.169: INFO: Restart count of pod e2e-tests-container-probe-8pbs2/liveness-http is now 2 (40.994026678s elapsed)
Jan  2 18:12:13.683: INFO: Restart count of pod e2e-tests-container-probe-8pbs2/liveness-http is now 3 (1m1.507369749s elapsed)
Jan  2 18:12:31.989: INFO: Restart count of pod e2e-tests-container-probe-8pbs2/liveness-http is now 4 (1m19.814024225s elapsed)
Jan  2 18:13:32.830: INFO: Restart count of pod e2e-tests-container-probe-8pbs2/liveness-http is now 5 (2m20.655022703s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:13:32.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-8pbs2" for this suite.
Jan  2 18:13:41.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:13:41.217: INFO: namespace: e2e-tests-container-probe-8pbs2, resource: bindings, ignored listing per whitelist
Jan  2 18:13:41.226: INFO: namespace e2e-tests-container-probe-8pbs2 deletion completed in 8.244250065s

• [SLOW TEST:161.311 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
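The restart counts above come from an httpGet liveness probe that the `liveness-http` pod is designed to fail; each failure makes the kubelet restart the container, and the test checks the count only ever increases. A hedged sketch of such a probe (paths, ports, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http   # as in the test; spec details here are illustrative
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.0
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1   # restart on the first failed probe
```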
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:13:41.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:13:41.480: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  2 18:13:46.804: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  2 18:13:51.337: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 18:13:51.469: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-vr8lk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vr8lk/deployments/test-cleanup-deployment,UID:a1082efe-2d8b-11ea-a994-fa163e34d433,ResourceVersion:16943203,Generation:1,CreationTimestamp:2020-01-02 18:13:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  2 18:13:51.496: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jan  2 18:13:51.496: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan  2 18:13:51.497: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-vr8lk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vr8lk/replicasets/test-cleanup-controller,UID:9b1570a3-2d8b-11ea-a994-fa163e34d433,ResourceVersion:16943205,Generation:1,CreationTimestamp:2020-01-02 18:13:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a1082efe-2d8b-11ea-a994-fa163e34d433 0xc001ccf7a7 0xc001ccf7a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  2 18:13:51.524: INFO: Pod "test-cleanup-controller-rc94b" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-rc94b,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-vr8lk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vr8lk/pods/test-cleanup-controller-rc94b,UID:9b265597-2d8b-11ea-a994-fa163e34d433,ResourceVersion:16943200,Generation:0,CreationTimestamp:2020-01-02 18:13:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 9b1570a3-2d8b-11ea-a994-fa163e34d433 0xc001cd8397 0xc001cd8398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dq52w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dq52w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dq52w true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001cd8400} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001cd8420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:13:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:13:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:13:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:13:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-02 18:13:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-02 18:13:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://45274d307006467e41468eaa5909d17db71465418d09fbf644e276dee4c4d80e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:13:51.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-vr8lk" for this suite.
Jan  2 18:13:59.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:14:00.520: INFO: namespace: e2e-tests-deployment-vr8lk, resource: bindings, ignored listing per whitelist
Jan  2 18:14:01.064: INFO: namespace e2e-tests-deployment-vr8lk deletion completed in 9.455840226s

• [SLOW TEST:19.838 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
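The Deployment dump above shows `RevisionHistoryLimit:*0`, which is what makes the old ReplicaSet eligible for deletion as soon as the rollout adopts it. A sketch of the equivalent manifest (assembled from the fields in the dump; anything not shown there is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  revisionHistoryLimit: 0   # keep no old ReplicaSets after rollout
  replicas: 1
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```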
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:14:01.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 18:14:01.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-2z6zs'
Jan  2 18:14:01.680: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 18:14:01.680: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan  2 18:14:05.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-2z6zs'
Jan  2 18:14:06.301: INFO: stderr: ""
Jan  2 18:14:06.301: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:14:06.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2z6zs" for this suite.
Jan  2 18:14:30.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:14:30.685: INFO: namespace: e2e-tests-kubectl-2z6zs, resource: bindings, ignored listing per whitelist
Jan  2 18:14:30.887: INFO: namespace e2e-tests-kubectl-2z6zs deletion completed in 24.553125198s

• [SLOW TEST:29.823 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:14:30.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:14:31.087: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  2 18:14:31.252: INFO: Number of nodes with available pods: 0
Jan  2 18:14:31.252: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:33.115: INFO: Number of nodes with available pods: 0
Jan  2 18:14:33.115: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:33.281: INFO: Number of nodes with available pods: 0
Jan  2 18:14:33.281: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:34.317: INFO: Number of nodes with available pods: 0
Jan  2 18:14:34.317: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:35.287: INFO: Number of nodes with available pods: 0
Jan  2 18:14:35.287: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:36.269: INFO: Number of nodes with available pods: 0
Jan  2 18:14:36.269: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:38.647: INFO: Number of nodes with available pods: 0
Jan  2 18:14:38.647: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:39.528: INFO: Number of nodes with available pods: 0
Jan  2 18:14:39.528: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:40.285: INFO: Number of nodes with available pods: 0
Jan  2 18:14:40.285: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:41.278: INFO: Number of nodes with available pods: 0
Jan  2 18:14:41.278: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:42.273: INFO: Number of nodes with available pods: 1
Jan  2 18:14:42.273: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  2 18:14:42.365: INFO: Wrong image for pod: daemon-set-2srfg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 18:14:43.389: INFO: Wrong image for pod: daemon-set-2srfg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 18:14:44.387: INFO: Wrong image for pod: daemon-set-2srfg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 18:14:45.485: INFO: Wrong image for pod: daemon-set-2srfg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 18:14:46.401: INFO: Wrong image for pod: daemon-set-2srfg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 18:14:47.687: INFO: Wrong image for pod: daemon-set-2srfg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 18:14:48.389: INFO: Wrong image for pod: daemon-set-2srfg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 18:14:49.407: INFO: Wrong image for pod: daemon-set-2srfg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 18:14:49.407: INFO: Pod daemon-set-2srfg is not available
Jan  2 18:14:50.396: INFO: Wrong image for pod: daemon-set-2srfg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 18:14:50.396: INFO: Pod daemon-set-2srfg is not available
Jan  2 18:14:51.392: INFO: Wrong image for pod: daemon-set-2srfg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 18:14:51.392: INFO: Pod daemon-set-2srfg is not available
Jan  2 18:14:52.394: INFO: Wrong image for pod: daemon-set-2srfg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  2 18:14:52.394: INFO: Pod daemon-set-2srfg is not available
Jan  2 18:14:53.389: INFO: Pod daemon-set-8cn8g is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  2 18:14:53.403: INFO: Number of nodes with available pods: 0
Jan  2 18:14:53.403: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:54.431: INFO: Number of nodes with available pods: 0
Jan  2 18:14:54.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:55.428: INFO: Number of nodes with available pods: 0
Jan  2 18:14:55.428: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:56.570: INFO: Number of nodes with available pods: 0
Jan  2 18:14:56.570: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:58.840: INFO: Number of nodes with available pods: 0
Jan  2 18:14:58.840: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:14:59.601: INFO: Number of nodes with available pods: 0
Jan  2 18:14:59.601: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:15:00.418: INFO: Number of nodes with available pods: 0
Jan  2 18:15:00.418: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:15:01.451: INFO: Number of nodes with available pods: 1
Jan  2 18:15:01.451: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-q6h2h, will wait for the garbage collector to delete the pods
Jan  2 18:15:01.583: INFO: Deleting DaemonSet.extensions daemon-set took: 25.73099ms
Jan  2 18:15:01.784: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.242831ms
Jan  2 18:15:12.669: INFO: Number of nodes with available pods: 0
Jan  2 18:15:12.669: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 18:15:12.674: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-q6h2h/daemonsets","resourceVersion":"16943438"},"items":null}

Jan  2 18:15:12.678: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-q6h2h/pods","resourceVersion":"16943438"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:15:12.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-q6h2h" for this suite.
Jan  2 18:15:20.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:15:20.899: INFO: namespace: e2e-tests-daemonsets-q6h2h, resource: bindings, ignored listing per whitelist
Jan  2 18:15:20.900: INFO: namespace e2e-tests-daemonsets-q6h2h deletion completed in 8.208825596s

• [SLOW TEST:50.012 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:15:20.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-w6n8t
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-w6n8t
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-w6n8t
Jan  2 18:15:21.156: INFO: Found 0 stateful pods, waiting for 1
Jan  2 18:15:31.170: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan  2 18:15:31.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 18:15:32.200: INFO: stderr: ""
Jan  2 18:15:32.200: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 18:15:32.200: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 18:15:32.213: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  2 18:15:42.241: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 18:15:42.241: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 18:15:42.301: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 18:15:42.301: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  }]
Jan  2 18:15:42.302: INFO: 
Jan  2 18:15:42.302: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  2 18:15:44.041: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.97094207s
Jan  2 18:15:45.391: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.231054908s
Jan  2 18:15:46.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.881188515s
Jan  2 18:15:47.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.608842232s
Jan  2 18:15:48.720: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.597831193s
Jan  2 18:15:50.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.552599918s
Jan  2 18:15:52.031: INFO: Verifying statefulset ss doesn't scale past 3 for another 818.539037ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-w6n8t
Jan  2 18:15:53.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 18:15:53.960: INFO: stderr: ""
Jan  2 18:15:53.960: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 18:15:53.960: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 18:15:53.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 18:15:54.317: INFO: rc: 1
Jan  2 18:15:54.317: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00155c3c0 exit status 1   true [0xc00251e2a8 0xc00251e2c0 0xc00251e2e0] [0xc00251e2a8 0xc00251e2c0 0xc00251e2e0] [0xc00251e2b8 0xc00251e2d8] [0x935700 0x935700] 0xc0018ad740 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan  2 18:16:04.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 18:16:05.028: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Jan  2 18:16:05.028: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 18:16:05.028: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 18:16:05.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 18:16:05.447: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Jan  2 18:16:05.447: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  2 18:16:05.447: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  2 18:16:05.466: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 18:16:05.466: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  2 18:16:05.466: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan  2 18:16:05.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 18:16:05.965: INFO: stderr: ""
Jan  2 18:16:05.965: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 18:16:05.965: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 18:16:05.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 18:16:06.441: INFO: stderr: ""
Jan  2 18:16:06.442: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 18:16:06.442: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 18:16:06.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  2 18:16:07.315: INFO: stderr: ""
Jan  2 18:16:07.315: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  2 18:16:07.315: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  2 18:16:07.315: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 18:16:07.326: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  2 18:16:17.357: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 18:16:17.357: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 18:16:17.357: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  2 18:16:17.441: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 18:16:17.442: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  }]
Jan  2 18:16:17.442: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:17.442: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:17.442: INFO: 
Jan  2 18:16:17.442: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 18:16:18.457: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 18:16:18.457: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  }]
Jan  2 18:16:18.457: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:18.457: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:18.457: INFO: 
Jan  2 18:16:18.457: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 18:16:19.832: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 18:16:19.832: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  }]
Jan  2 18:16:19.833: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:19.833: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:19.833: INFO: 
Jan  2 18:16:19.833: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 18:16:20.877: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 18:16:20.877: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  }]
Jan  2 18:16:20.877: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:20.877: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:20.877: INFO: 
Jan  2 18:16:20.877: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 18:16:21.911: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 18:16:21.912: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  }]
Jan  2 18:16:21.912: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:21.912: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:21.912: INFO: 
Jan  2 18:16:21.912: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 18:16:22.939: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 18:16:22.939: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  }]
Jan  2 18:16:22.939: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:22.939: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:22.939: INFO: 
Jan  2 18:16:22.939: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 18:16:24.332: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 18:16:24.332: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  }]
Jan  2 18:16:24.333: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:24.333: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:24.333: INFO: 
Jan  2 18:16:24.333: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 18:16:25.514: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 18:16:25.514: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  }]
Jan  2 18:16:25.514: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:25.514: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:25.514: INFO: 
Jan  2 18:16:25.515: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  2 18:16:26.547: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  2 18:16:26.547: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:21 +0000 UTC  }]
Jan  2 18:16:26.547: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:26.547: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:15:42 +0000 UTC  }]
Jan  2 18:16:26.547: INFO: 
Jan  2 18:16:26.547: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-w6n8t
Jan  2 18:16:27.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 18:16:27.842: INFO: rc: 1
Jan  2 18:16:27.842: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0007a4e70 exit status 1   true [0xc00251e3a8 0xc00251e3c0 0xc00251e3d8] [0xc00251e3a8 0xc00251e3c0 0xc00251e3d8] [0xc00251e3b8 0xc00251e3d0] [0x935700 0x935700] 0xc001f1af00 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan  2 18:16:37.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 18:16:38.032: INFO: rc: 1
Jan  2 18:16:38.032: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0027194a0 exit status 1   true [0xc00034adb0 0xc00034adc8 0xc00034ae10] [0xc00034adb0 0xc00034adc8 0xc00034ae10] [0xc00034adc0 0xc00034ae08] [0x935700 0x935700] 0xc001bb18c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  2 18:16:48.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 18:16:48.165: INFO: rc: 1
Jan  2 18:16:48.165: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000ddbb90 exit status 1   true [0xc000c82600 0xc000c82618 0xc000c82630] [0xc000c82600 0xc000c82618 0xc000c82630] [0xc000c82610 0xc000c82628] [0x935700 0x935700] 0xc001db2480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

[... 27 near-identical retry blocks elided: the same RunHostCmd ('kubectl exec --namespace=e2e-tests-statefulset-w6n8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true') was re-run every ~10s from 18:16:58 through 18:21:23, each attempt returning rc: 1 with the same stderr: Error from server (NotFound): pods "ss-0" not found ...]
Jan  2 18:21:33.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w6n8t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  2 18:21:33.695: INFO: rc: 1
Jan  2 18:21:33.695: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan  2 18:21:33.695: INFO: Scaling statefulset ss to 0
Jan  2 18:21:33.718: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  2 18:21:33.721: INFO: Deleting all statefulset in ns e2e-tests-statefulset-w6n8t
Jan  2 18:21:33.725: INFO: Scaling statefulset ss to 0
Jan  2 18:21:33.737: INFO: Waiting for statefulset status.replicas updated to 0
Jan  2 18:21:33.739: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:21:33.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-w6n8t" for this suite.
Jan  2 18:21:41.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:21:42.047: INFO: namespace: e2e-tests-statefulset-w6n8t, resource: bindings, ignored listing per whitelist
Jan  2 18:21:42.053: INFO: namespace e2e-tests-statefulset-w6n8t deletion completed in 8.217399794s

• [SLOW TEST:381.153 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
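The five-minute run of RunHostCmd failures above is the e2e framework re-running the same `kubectl exec` every 10 seconds until its deadline passes (the pod had already been deleted by the scale-down, so every attempt failed). The retry pattern can be sketched as a generic shell helper; `retry_until` is a hypothetical illustration, not the framework's actual code:

```shell
#!/bin/sh
# Sketch of the retry pattern seen in the log above: re-run a command
# every 10s until it succeeds or a deadline (in seconds) expires.
# retry_until is an illustrative helper, not part of the e2e framework.
retry_until() {
    deadline=$(( $(date +%s) + $1 )); shift
    until "$@"; do
        [ "$(date +%s)" -ge "$deadline" ] && return 1
        echo "Waiting 10s to retry failed command" >&2
        sleep 10
    done
}

# Against a live cluster this would look like (illustration only):
# retry_until 300 kubectl exec -n e2e-tests-statefulset-w6n8t ss-0 -- \
#     /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
```

Note the trailing `|| true` inside the remote shell: the test tolerates the `mv` itself failing, but the retries here fail earlier, at the API-server level, because the target pod no longer exists.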
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:21:42.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b9d82df9-2d8c-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 18:21:42.764: INFO: Waiting up to 5m0s for pod "pod-configmaps-b9f9cf79-2d8c-11ea-b611-0242ac110005" in namespace "e2e-tests-configmap-ggmbv" to be "success or failure"
Jan  2 18:21:42.938: INFO: Pod "pod-configmaps-b9f9cf79-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 174.429104ms
Jan  2 18:21:44.976: INFO: Pod "pod-configmaps-b9f9cf79-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211991343s
Jan  2 18:21:47.004: INFO: Pod "pod-configmaps-b9f9cf79-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239956999s
Jan  2 18:21:50.094: INFO: Pod "pod-configmaps-b9f9cf79-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.330336833s
Jan  2 18:21:52.110: INFO: Pod "pod-configmaps-b9f9cf79-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.346029542s
Jan  2 18:21:54.125: INFO: Pod "pod-configmaps-b9f9cf79-2d8c-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.361659663s
STEP: Saw pod success
Jan  2 18:21:54.126: INFO: Pod "pod-configmaps-b9f9cf79-2d8c-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:21:54.134: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b9f9cf79-2d8c-11ea-b611-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 18:21:54.413: INFO: Waiting for pod pod-configmaps-b9f9cf79-2d8c-11ea-b611-0242ac110005 to disappear
Jan  2 18:21:54.428: INFO: Pod pod-configmaps-b9f9cf79-2d8c-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:21:54.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ggmbv" for this suite.
Jan  2 18:22:00.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:22:00.866: INFO: namespace: e2e-tests-configmap-ggmbv, resource: bindings, ignored listing per whitelist
Jan  2 18:22:00.872: INFO: namespace e2e-tests-configmap-ggmbv deletion completed in 6.43227351s

• [SLOW TEST:18.818 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
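A note on the `defaultMode` the test above exercises: Kubernetes serializes volume file modes as plain decimal integers, because JSON has no octal literals. The familiar 0644 permission therefore appears as `420` in pod specs (the service-account token volume later in this log shows exactly `"defaultMode": 420`). A quick sanity check:

```python
# Kubernetes volume modes are decimal in the API; 420 decimal is 0644 octal.
default_mode = 420
assert default_mode == 0o644
print(oct(default_mode))  # 0o644
```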
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:22:00.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-c4e932c6-2d8c-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 18:22:01.147: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c4e9da49-2d8c-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-tmw5z" to be "success or failure"
Jan  2 18:22:01.159: INFO: Pod "pod-projected-secrets-c4e9da49-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.648179ms
Jan  2 18:22:03.235: INFO: Pod "pod-projected-secrets-c4e9da49-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087027708s
Jan  2 18:22:05.276: INFO: Pod "pod-projected-secrets-c4e9da49-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128515171s
Jan  2 18:22:07.649: INFO: Pod "pod-projected-secrets-c4e9da49-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.501545097s
Jan  2 18:22:09.662: INFO: Pod "pod-projected-secrets-c4e9da49-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514048103s
Jan  2 18:22:11.682: INFO: Pod "pod-projected-secrets-c4e9da49-2d8c-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.53414269s
STEP: Saw pod success
Jan  2 18:22:11.682: INFO: Pod "pod-projected-secrets-c4e9da49-2d8c-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:22:11.697: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-c4e9da49-2d8c-11ea-b611-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 18:22:11.895: INFO: Waiting for pod pod-projected-secrets-c4e9da49-2d8c-11ea-b611-0242ac110005 to disappear
Jan  2 18:22:11.908: INFO: Pod pod-projected-secrets-c4e9da49-2d8c-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:22:11.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tmw5z" for this suite.
Jan  2 18:22:17.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:22:18.027: INFO: namespace: e2e-tests-projected-tmw5z, resource: bindings, ignored listing per whitelist
Jan  2 18:22:18.072: INFO: namespace e2e-tests-projected-tmw5z deletion completed in 6.152902308s

• [SLOW TEST:17.199 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
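The projected-secret volume built by the test above can be sketched as a manifest like the following. All names here are hypothetical, and `busybox` stands in for the test's own image:

```yaml
# Sketch of a pod consuming a secret via a projected volume (hypothetical names).
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox                      # assumption; the e2e test uses its own image
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-example   # hypothetical secret name
  restartPolicy: Never
```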
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:22:18.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan  2 18:22:18.372: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix645648798/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:22:18.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4g5fd" for this suite.
Jan  2 18:22:24.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:22:24.734: INFO: namespace: e2e-tests-kubectl-4g5fd, resource: bindings, ignored listing per whitelist
Jan  2 18:22:24.781: INFO: namespace e2e-tests-kubectl-4g5fd deletion completed in 6.329235441s

• [SLOW TEST:6.709 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
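The test above starts `kubectl proxy --unix-socket=/path` and then fetches `/api/` over that socket rather than over TCP. The same client pattern can be shown cluster-free with a stdlib-only stand-in: a tiny HTTP server bound to a unix socket, queried the way a client would query the proxy socket. The handler's payload is a placeholder, not real apiserver output.

```python
import http.client
import http.server
import os
import socket
import socketserver
import tempfile
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"kind": "APIVersions"}'  # placeholder for /api/ output
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

class UnixHTTPServer(socketserver.UnixStreamServer):
    def get_request(self):
        # BaseHTTPRequestHandler expects a (host, port) client address.
        request, _ = self.socket.accept()
        return request, ("localhost", 0)

sock_path = os.path.join(tempfile.mkdtemp(), "proxy.sock")
server = UnixHTTPServer(sock_path, Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Query over the unix socket, as one would against kubectl's proxy socket.
conn = http.client.HTTPConnection("localhost")
conn.sock = socket.socket(socket.AF_UNIX)  # bypass the TCP connect
conn.sock.connect(sock_path)
conn.request("GET", "/api/")
data = conn.getresponse().read()
print(data)
server.shutdown()
```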
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:22:24.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d32c3ce9-2d8c-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 18:22:25.069: INFO: Waiting up to 5m0s for pod "pod-secrets-d32d4c01-2d8c-11ea-b611-0242ac110005" in namespace "e2e-tests-secrets-vsnr7" to be "success or failure"
Jan  2 18:22:25.120: INFO: Pod "pod-secrets-d32d4c01-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.512448ms
Jan  2 18:22:27.224: INFO: Pod "pod-secrets-d32d4c01-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154422669s
Jan  2 18:22:29.335: INFO: Pod "pod-secrets-d32d4c01-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265803616s
Jan  2 18:22:31.420: INFO: Pod "pod-secrets-d32d4c01-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.350511884s
Jan  2 18:22:33.431: INFO: Pod "pod-secrets-d32d4c01-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.361549072s
Jan  2 18:22:35.448: INFO: Pod "pod-secrets-d32d4c01-2d8c-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.378692362s
STEP: Saw pod success
Jan  2 18:22:35.448: INFO: Pod "pod-secrets-d32d4c01-2d8c-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:22:35.460: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d32d4c01-2d8c-11ea-b611-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 18:22:36.320: INFO: Waiting for pod pod-secrets-d32d4c01-2d8c-11ea-b611-0242ac110005 to disappear
Jan  2 18:22:36.368: INFO: Pod pod-secrets-d32d4c01-2d8c-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:22:36.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vsnr7" for this suite.
Jan  2 18:22:42.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:22:42.712: INFO: namespace: e2e-tests-secrets-vsnr7, resource: bindings, ignored listing per whitelist
Jan  2 18:22:42.719: INFO: namespace e2e-tests-secrets-vsnr7 deletion completed in 6.168128986s

• [SLOW TEST:17.938 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:22:42.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 18:22:42.846: INFO: Waiting up to 5m0s for pod "downward-api-ddd09388-2d8c-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-tfjnj" to be "success or failure"
Jan  2 18:22:42.899: INFO: Pod "downward-api-ddd09388-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.913184ms
Jan  2 18:22:44.926: INFO: Pod "downward-api-ddd09388-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079806912s
Jan  2 18:22:46.946: INFO: Pod "downward-api-ddd09388-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099937291s
Jan  2 18:22:48.974: INFO: Pod "downward-api-ddd09388-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127708487s
Jan  2 18:22:50.992: INFO: Pod "downward-api-ddd09388-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145648093s
Jan  2 18:22:53.028: INFO: Pod "downward-api-ddd09388-2d8c-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181909013s
STEP: Saw pod success
Jan  2 18:22:53.029: INFO: Pod "downward-api-ddd09388-2d8c-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:22:53.058: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-ddd09388-2d8c-11ea-b611-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 18:22:53.227: INFO: Waiting for pod downward-api-ddd09388-2d8c-11ea-b611-0242ac110005 to disappear
Jan  2 18:22:53.234: INFO: Pod downward-api-ddd09388-2d8c-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:22:53.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tfjnj" for this suite.
Jan  2 18:22:59.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:22:59.422: INFO: namespace: e2e-tests-downward-api-tfjnj, resource: bindings, ignored listing per whitelist
Jan  2 18:22:59.520: INFO: namespace e2e-tests-downward-api-tfjnj deletion completed in 6.277166722s

• [SLOW TEST:16.800 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
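The env-var wiring the Downward API test exercises looks like the sketch below: a `fieldRef` to `status.hostIP` injected as an environment variable. Pod and container names are hypothetical:

```yaml
# Sketch: expose the node's IP to the container via the downward API.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  containers:
  - name: dapi-container
    image: busybox            # assumption; the e2e test uses its own image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
  restartPolicy: Never
```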
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:22:59.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 18:22:59.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7e2cca2-2d8c-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-cpbt2" to be "success or failure"
Jan  2 18:22:59.801: INFO: Pod "downwardapi-volume-e7e2cca2-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.722835ms
Jan  2 18:23:01.847: INFO: Pod "downwardapi-volume-e7e2cca2-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103642042s
Jan  2 18:23:03.879: INFO: Pod "downwardapi-volume-e7e2cca2-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135533536s
Jan  2 18:23:05.908: INFO: Pod "downwardapi-volume-e7e2cca2-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164000289s
Jan  2 18:23:08.314: INFO: Pod "downwardapi-volume-e7e2cca2-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.570690183s
Jan  2 18:23:10.340: INFO: Pod "downwardapi-volume-e7e2cca2-2d8c-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.596123275s
STEP: Saw pod success
Jan  2 18:23:10.340: INFO: Pod "downwardapi-volume-e7e2cca2-2d8c-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:23:10.347: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e7e2cca2-2d8c-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 18:23:10.943: INFO: Waiting for pod downwardapi-volume-e7e2cca2-2d8c-11ea-b611-0242ac110005 to disappear
Jan  2 18:23:10.962: INFO: Pod downwardapi-volume-e7e2cca2-2d8c-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:23:10.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cpbt2" for this suite.
Jan  2 18:23:17.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:23:17.154: INFO: namespace: e2e-tests-projected-cpbt2, resource: bindings, ignored listing per whitelist
Jan  2 18:23:17.182: INFO: namespace e2e-tests-projected-cpbt2 deletion completed in 6.211472934s

• [SLOW TEST:17.662 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:23:17.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  2 18:23:17.376: INFO: Waiting up to 5m0s for pod "pod-f2652b22-2d8c-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-bflfh" to be "success or failure"
Jan  2 18:23:17.383: INFO: Pod "pod-f2652b22-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.475291ms
Jan  2 18:23:19.396: INFO: Pod "pod-f2652b22-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019293497s
Jan  2 18:23:21.405: INFO: Pod "pod-f2652b22-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02842416s
Jan  2 18:23:23.614: INFO: Pod "pod-f2652b22-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.237970428s
Jan  2 18:23:25.628: INFO: Pod "pod-f2652b22-2d8c-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252235704s
Jan  2 18:23:27.652: INFO: Pod "pod-f2652b22-2d8c-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.275921445s
STEP: Saw pod success
Jan  2 18:23:27.652: INFO: Pod "pod-f2652b22-2d8c-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:23:27.658: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f2652b22-2d8c-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 18:23:27.908: INFO: Waiting for pod pod-f2652b22-2d8c-11ea-b611-0242ac110005 to disappear
Jan  2 18:23:27.922: INFO: Pod pod-f2652b22-2d8c-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:23:27.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bflfh" for this suite.
Jan  2 18:23:33.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:23:34.073: INFO: namespace: e2e-tests-emptydir-bflfh, resource: bindings, ignored listing per whitelist
Jan  2 18:23:34.240: INFO: namespace e2e-tests-emptydir-bflfh deletion completed in 6.311105713s

• [SLOW TEST:17.058 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:23:34.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:23:41.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-g8vcw" for this suite.
Jan  2 18:23:47.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:23:47.152: INFO: namespace: e2e-tests-namespaces-g8vcw, resource: bindings, ignored listing per whitelist
Jan  2 18:23:47.325: INFO: namespace e2e-tests-namespaces-g8vcw deletion completed in 6.229177243s
STEP: Destroying namespace "e2e-tests-nsdeletetest-lfjs6" for this suite.
Jan  2 18:23:47.332: INFO: Namespace e2e-tests-nsdeletetest-lfjs6 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-tr26k" for this suite.
Jan  2 18:23:53.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:23:53.499: INFO: namespace: e2e-tests-nsdeletetest-tr26k, resource: bindings, ignored listing per whitelist
Jan  2 18:23:53.537: INFO: namespace e2e-tests-nsdeletetest-tr26k deletion completed in 6.204852782s

• [SLOW TEST:19.295 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:23:53.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-081e9e96-2d8d-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 18:23:53.860: INFO: Waiting up to 5m0s for pod "pod-secrets-082241f1-2d8d-11ea-b611-0242ac110005" in namespace "e2e-tests-secrets-j9fq4" to be "success or failure"
Jan  2 18:23:53.946: INFO: Pod "pod-secrets-082241f1-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.712007ms
Jan  2 18:23:55.962: INFO: Pod "pod-secrets-082241f1-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10158912s
Jan  2 18:23:57.979: INFO: Pod "pod-secrets-082241f1-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119247157s
Jan  2 18:24:00.304: INFO: Pod "pod-secrets-082241f1-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443530628s
Jan  2 18:24:02.321: INFO: Pod "pod-secrets-082241f1-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.460851344s
Jan  2 18:24:04.369: INFO: Pod "pod-secrets-082241f1-2d8d-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.508704168s
STEP: Saw pod success
Jan  2 18:24:04.369: INFO: Pod "pod-secrets-082241f1-2d8d-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:24:04.409: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-082241f1-2d8d-11ea-b611-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 18:24:04.687: INFO: Waiting for pod pod-secrets-082241f1-2d8d-11ea-b611-0242ac110005 to disappear
Jan  2 18:24:04.717: INFO: Pod pod-secrets-082241f1-2d8d-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:24:04.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-j9fq4" for this suite.
Jan  2 18:24:12.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:24:12.947: INFO: namespace: e2e-tests-secrets-j9fq4, resource: bindings, ignored listing per whitelist
Jan  2 18:24:13.045: INFO: namespace e2e-tests-secrets-j9fq4 deletion completed in 8.308569786s

• [SLOW TEST:19.508 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:24:13.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 18:24:13.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mbwcg'
Jan  2 18:24:15.550: INFO: stderr: ""
Jan  2 18:24:15.550: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  2 18:24:30.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mbwcg -o json'
Jan  2 18:24:30.865: INFO: stderr: ""
Jan  2 18:24:30.865: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-02T18:24:15Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-mbwcg\",\n        \"resourceVersion\": \"16944504\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-mbwcg/pods/e2e-test-nginx-pod\",\n        \"uid\": \"1508fd33-2d8d-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-mr45n\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-mr45n\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-mr45n\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T18:24:15Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T18:24:26Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T18:24:26Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-02T18:24:15Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://f8a31b7ae99820828818fb71695ba351374aaef335aff796f17be66076adb286\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2020-01-02T18:24:25Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-02T18:24:15Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  2 18:24:30.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-mbwcg'
Jan  2 18:24:31.287: INFO: stderr: ""
Jan  2 18:24:31.287: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan  2 18:24:31.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mbwcg'
Jan  2 18:24:39.455: INFO: stderr: ""
Jan  2 18:24:39.456: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:24:39.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mbwcg" for this suite.
Jan  2 18:24:47.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:24:47.933: INFO: namespace: e2e-tests-kubectl-mbwcg, resource: bindings, ignored listing per whitelist
Jan  2 18:24:47.998: INFO: namespace e2e-tests-kubectl-mbwcg deletion completed in 8.385704314s

• [SLOW TEST:34.953 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
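The `replace -f -` invocation above pipes a complete manifest on stdin; unlike `kubectl patch`, `kubectl replace` requires the full object. A minimal manifest of the shape involved — reconstructed from the logged pod and the verified target image, not the test's actual input — would look like:

```yaml
# Reconstruction for illustration; the real input is generated inside the
# e2e test. Only the image changes relative to the original nginx pod.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: e2e-tests-kubectl-mbwcg
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # replaced image, per the STEP above
```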
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:24:47.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan  2 18:24:48.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  2 18:24:48.344: INFO: stderr: ""
Jan  2 18:24:48.344: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:24:48.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c248s" for this suite.
Jan  2 18:24:54.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:24:54.681: INFO: namespace: e2e-tests-kubectl-c248s, resource: bindings, ignored listing per whitelist
Jan  2 18:24:54.692: INFO: namespace e2e-tests-kubectl-c248s deletion completed in 6.336040688s

• [SLOW TEST:6.693 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
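The `cluster-info` stdout logged above is wrapped in ANSI color escapes (`\x1b[0;32m` and so on). When post-processing such logs, a small filter like the following strips them; this is an illustrative helper, not part of the e2e suite:

```shell
strip_ansi() {
  # ANSI-C quoting ($'...') turns \x1b into a literal ESC byte for sed;
  # the pattern matches color sequences like ESC[0;32m and removes them.
  sed $'s/\x1b\\[[0-9;]*m//g'
}

printf '\033[0;32mKubernetes master\033[0m is running at \033[0;33mhttps://172.24.4.212:6443\033[0m\n' | strip_ansi
# -> Kubernetes master is running at https://172.24.4.212:6443
```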
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:24:54.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  2 18:25:15.294: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:15.416: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:17.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:17.428: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:19.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:19.441: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:21.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:21.435: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:23.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:23.434: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:25.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:25.428: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:27.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:27.435: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:29.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:29.432: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:31.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:31.436: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:33.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:33.446: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:35.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:35.434: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:37.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:37.443: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:39.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:39.430: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:41.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:41.435: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  2 18:25:43.417: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  2 18:25:43.438: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:25:43.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ffwvr" for this suite.
Jan  2 18:26:07.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:26:07.705: INFO: namespace: e2e-tests-container-lifecycle-hook-ffwvr, resource: bindings, ignored listing per whitelist
Jan  2 18:26:07.718: INFO: namespace e2e-tests-container-lifecycle-hook-ffwvr deletion completed in 24.231531129s

• [SLOW TEST:73.026 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
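The pod created in this test carries a `preStop` exec hook, which the kubelet runs before sending SIGTERM; the long "still exists" polling above covers the hook's execution plus the grace period. The real spec (handler address, hook command) lives in test/e2e/common/lifecycle_hook.go, so the command and image below are assumptions showing only the API shape:

```yaml
# Illustrative shape of the pod under test; command and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: docker.io/library/busybox:1.29
    command: ["/bin/sleep", "3600"]
    lifecycle:
      preStop:                # runs before the container receives SIGTERM
        exec:
          command: ["sh", "-c", "wget -qO- http://HANDLER_IP:8080/echo?msg=prestop"]
```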
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:26:07.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  2 18:26:07.984: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:26:31.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-crkvh" for this suite.
Jan  2 18:26:55.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:26:55.270: INFO: namespace: e2e-tests-init-container-crkvh, resource: bindings, ignored listing per whitelist
Jan  2 18:26:55.302: INFO: namespace e2e-tests-init-container-crkvh deletion completed in 24.277196254s

• [SLOW TEST:47.583 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
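"PodSpec: initContainers in spec.initContainers" refers to the standard init-container mechanism: each entry in `spec.initContainers` must run to completion, in order, before any of `spec.containers` starts. The actual spec is built in test/e2e/common/init_container.go; names and images below are assumptions illustrating the shape:

```yaml
# Sketch of a RestartAlways pod with init containers; names/images assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:             # run sequentially, to completion, before "containers"
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["/bin/sleep", "3600"]
```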
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:26:55.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-74a12336-2d8d-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 18:26:55.943: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-74ab6e4b-2d8d-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-7x7w4" to be "success or failure"
Jan  2 18:26:56.003: INFO: Pod "pod-projected-secrets-74ab6e4b-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 59.980434ms
Jan  2 18:26:58.261: INFO: Pod "pod-projected-secrets-74ab6e4b-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318703608s
Jan  2 18:27:00.280: INFO: Pod "pod-projected-secrets-74ab6e4b-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337155472s
Jan  2 18:27:02.440: INFO: Pod "pod-projected-secrets-74ab6e4b-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496876823s
Jan  2 18:27:04.519: INFO: Pod "pod-projected-secrets-74ab6e4b-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.576306589s
Jan  2 18:27:06.647: INFO: Pod "pod-projected-secrets-74ab6e4b-2d8d-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.704575073s
STEP: Saw pod success
Jan  2 18:27:06.647: INFO: Pod "pod-projected-secrets-74ab6e4b-2d8d-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:27:06.656: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-74ab6e4b-2d8d-11ea-b611-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 18:27:06.856: INFO: Waiting for pod pod-projected-secrets-74ab6e4b-2d8d-11ea-b611-0242ac110005 to disappear
Jan  2 18:27:06.866: INFO: Pod pod-projected-secrets-74ab6e4b-2d8d-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:27:06.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7x7w4" for this suite.
Jan  2 18:27:12.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:27:12.960: INFO: namespace: e2e-tests-projected-7x7w4, resource: bindings, ignored listing per whitelist
Jan  2 18:27:13.094: INFO: namespace e2e-tests-projected-7x7w4 deletion completed in 6.216798935s

• [SLOW TEST:17.792 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
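"mappings and Item Mode" refers to a projected secret volume whose `items` remap a secret key to a new path with an explicit per-file `mode`. The secret and pod names in the log are generated per-run; the names, key, path, and mode below are assumptions showing only the API shape:

```yaml
# Sketch of the projected-secret pod; names, key, path, and mode are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: my-projected-secret     # per-run name in the log; assumed here
          items:
          - key: data-1
            path: new-path-data-1       # the "mapping"
            mode: 0400                  # the "Item Mode" the test exercises
```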
S
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:27:13.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 18:27:13.276: INFO: Waiting up to 5m0s for pod "downward-api-7f012073-2d8d-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-fw6xj" to be "success or failure"
Jan  2 18:27:13.288: INFO: Pod "downward-api-7f012073-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.049744ms
Jan  2 18:27:15.322: INFO: Pod "downward-api-7f012073-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045471966s
Jan  2 18:27:17.335: INFO: Pod "downward-api-7f012073-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058712228s
Jan  2 18:27:19.897: INFO: Pod "downward-api-7f012073-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.620979536s
Jan  2 18:27:21.912: INFO: Pod "downward-api-7f012073-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.63559308s
Jan  2 18:27:23.925: INFO: Pod "downward-api-7f012073-2d8d-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.648482605s
STEP: Saw pod success
Jan  2 18:27:23.925: INFO: Pod "downward-api-7f012073-2d8d-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:27:23.928: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7f012073-2d8d-11ea-b611-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 18:27:24.308: INFO: Waiting for pod downward-api-7f012073-2d8d-11ea-b611-0242ac110005 to disappear
Jan  2 18:27:24.555: INFO: Pod downward-api-7f012073-2d8d-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:27:24.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fw6xj" for this suite.
Jan  2 18:27:30.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:27:30.742: INFO: namespace: e2e-tests-downward-api-fw6xj, resource: bindings, ignored listing per whitelist
Jan  2 18:27:30.756: INFO: namespace e2e-tests-downward-api-fw6xj deletion completed in 6.179360795s

• [SLOW TEST:17.662 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
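The "default limits from node allocatable" behavior comes from `resourceFieldRef` env vars: when the container declares no resource limits, `limits.cpu` and `limits.memory` resolve to the node's allocatable values, which is what the test asserts. The env-var wiring looks roughly like this (container and variable names are assumptions):

```yaml
# Sketch of the downward-API env wiring; names are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]
    # no resources.limits set, so the refs below fall back to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```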
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:27:30.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:27:31.014: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 30.139697ms)
Jan  2 18:27:31.022: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.136349ms)
Jan  2 18:27:31.027: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.68535ms)
Jan  2 18:27:31.032: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.59212ms)
Jan  2 18:27:31.037: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.00247ms)
Jan  2 18:27:31.042: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.862481ms)
Jan  2 18:27:31.047: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.328587ms)
Jan  2 18:27:31.053: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.980003ms)
Jan  2 18:27:31.058: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.585438ms)
Jan  2 18:27:31.063: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.251019ms)
Jan  2 18:27:31.070: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.637938ms)
Jan  2 18:27:31.162: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 92.033871ms)
Jan  2 18:27:31.179: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.211652ms)
Jan  2 18:27:31.204: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 24.665966ms)
Jan  2 18:27:31.212: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.813751ms)
Jan  2 18:27:31.218: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.565051ms)
Jan  2 18:27:31.223: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.722593ms)
Jan  2 18:27:31.229: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.516517ms)
Jan  2 18:27:31.236: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.962751ms)
Jan  2 18:27:31.243: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.390473ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:27:31.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-9ckr5" for this suite.
Jan  2 18:27:37.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:27:37.493: INFO: namespace: e2e-tests-proxy-9ckr5, resource: bindings, ignored listing per whitelist
Jan  2 18:27:37.655: INFO: namespace e2e-tests-proxy-9ckr5 deletion completed in 6.405955472s

• [SLOW TEST:6.898 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:27:37.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:27:37.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  2 18:27:38.048: INFO: stderr: ""
Jan  2 18:27:38.048: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:27:38.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wnwct" for this suite.
Jan  2 18:27:44.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:27:44.189: INFO: namespace: e2e-tests-kubectl-wnwct, resource: bindings, ignored listing per whitelist
Jan  2 18:27:44.259: INFO: namespace e2e-tests-kubectl-wnwct deletion completed in 6.196731999s

• [SLOW TEST:6.604 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:27:44.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  2 18:27:44.488: INFO: namespace e2e-tests-kubectl-5pgsx
Jan  2 18:27:44.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5pgsx'
Jan  2 18:27:45.103: INFO: stderr: ""
Jan  2 18:27:45.104: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  2 18:27:46.118: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:46.118: INFO: Found 0 / 1
Jan  2 18:27:47.127: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:47.128: INFO: Found 0 / 1
Jan  2 18:27:48.139: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:48.140: INFO: Found 0 / 1
Jan  2 18:27:49.134: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:49.134: INFO: Found 0 / 1
Jan  2 18:27:50.128: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:50.128: INFO: Found 0 / 1
Jan  2 18:27:51.149: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:51.149: INFO: Found 0 / 1
Jan  2 18:27:52.217: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:52.217: INFO: Found 0 / 1
Jan  2 18:27:53.121: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:53.121: INFO: Found 0 / 1
Jan  2 18:27:54.189: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:54.189: INFO: Found 0 / 1
Jan  2 18:27:55.147: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:55.147: INFO: Found 0 / 1
Jan  2 18:27:56.148: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:56.148: INFO: Found 1 / 1
Jan  2 18:27:56.148: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  2 18:27:56.154: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:27:56.154: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  2 18:27:56.154: INFO: wait on redis-master startup in e2e-tests-kubectl-5pgsx 
Jan  2 18:27:56.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8cn5p redis-master --namespace=e2e-tests-kubectl-5pgsx'
Jan  2 18:27:56.489: INFO: stderr: ""
Jan  2 18:27:56.489: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Jan 18:27:54.704 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jan 18:27:54.704 # Server started, Redis version 3.2.12\n1:M 02 Jan 18:27:54.704 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jan 18:27:54.704 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  2 18:27:56.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-5pgsx'
Jan  2 18:27:56.761: INFO: stderr: ""
Jan  2 18:27:56.761: INFO: stdout: "service/rm2 exposed\n"
Jan  2 18:27:56.771: INFO: Service rm2 in namespace e2e-tests-kubectl-5pgsx found.
STEP: exposing service
Jan  2 18:27:58.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-5pgsx'
Jan  2 18:27:59.041: INFO: stderr: ""
Jan  2 18:27:59.041: INFO: stdout: "service/rm3 exposed\n"
Jan  2 18:27:59.068: INFO: Service rm3 in namespace e2e-tests-kubectl-5pgsx found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:28:01.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5pgsx" for this suite.
Jan  2 18:28:27.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:28:27.336: INFO: namespace: e2e-tests-kubectl-5pgsx, resource: bindings, ignored listing per whitelist
Jan  2 18:28:27.482: INFO: namespace e2e-tests-kubectl-5pgsx deletion completed in 26.366151318s

• [SLOW TEST:43.222 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
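The `Kubectl expose` test above chains two `kubectl expose` calls: it exposes the `redis-master` replication controller as service `rm2` on port 1234 (target port 6379), then exposes `rm2` itself as `rm3` on port 2345. As a rough sketch, the first call creates a Service object equivalent to the manifest below. The selector is an assumption: `kubectl expose rc` copies the replication controller's pod selector, which this log does not show.

```yaml
# Hedged equivalent of:
#   kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis-master   # assumed; in reality copied from the RC's selector
  ports:
  - port: 1234          # the Service's own port (--port)
    targetPort: 6379    # the Redis container's port (--target-port)
```

The second `expose` (service -> service) works the same way, reusing `rm2`'s selector so that `rm3:2345` also routes to the Redis pods on 6379.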
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:28:27.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:28:27.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan  2 18:28:27.719: INFO: stderr: ""
Jan  2 18:28:27.719: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan  2 18:28:27.725: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:28:27.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-htsvq" for this suite.
Jan  2 18:28:33.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:28:34.107: INFO: namespace: e2e-tests-kubectl-htsvq, resource: bindings, ignored listing per whitelist
Jan  2 18:28:34.146: INFO: namespace e2e-tests-kubectl-htsvq deletion completed in 6.370546191s

S [SKIPPING] [6.664 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan  2 18:28:27.725: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:28:34.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 18:28:34.413: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af59dc35-2d8d-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-skrjg" to be "success or failure"
Jan  2 18:28:34.465: INFO: Pod "downwardapi-volume-af59dc35-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.097725ms
Jan  2 18:28:36.507: INFO: Pod "downwardapi-volume-af59dc35-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094445408s
Jan  2 18:28:38.531: INFO: Pod "downwardapi-volume-af59dc35-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117771324s
Jan  2 18:28:41.833: INFO: Pod "downwardapi-volume-af59dc35-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.42034877s
Jan  2 18:28:43.846: INFO: Pod "downwardapi-volume-af59dc35-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.433048535s
Jan  2 18:28:45.870: INFO: Pod "downwardapi-volume-af59dc35-2d8d-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.456692008s
STEP: Saw pod success
Jan  2 18:28:45.870: INFO: Pod "downwardapi-volume-af59dc35-2d8d-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:28:45.879: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-af59dc35-2d8d-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 18:28:47.191: INFO: Waiting for pod downwardapi-volume-af59dc35-2d8d-11ea-b611-0242ac110005 to disappear
Jan  2 18:28:47.212: INFO: Pod downwardapi-volume-af59dc35-2d8d-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:28:47.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-skrjg" for this suite.
Jan  2 18:28:53.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:28:53.483: INFO: namespace: e2e-tests-projected-skrjg, resource: bindings, ignored listing per whitelist
Jan  2 18:28:53.508: INFO: namespace e2e-tests-projected-skrjg deletion completed in 6.288340286s

• [SLOW TEST:19.362 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
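The `Projected downwardAPI` test above creates a pod whose container (`client-container`, per the log) mounts a projected volume exposing the container's memory limit as a file, then asserts the pod reaches `Succeeded`. A minimal sketch of the volume portion of such a pod spec, assuming the file path and volume name (the log does not show the manifest):

```yaml
# Sketch of a projected downward API volume exposing limits.memory,
# as exercised by the test above. Path and volume name are assumptions.
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: "memory_limit"
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
```

The container reads the mounted file and prints the limit, which is what the harness fetches with "Trying to get logs from node ..." before deleting the pod.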
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:28:53.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  2 18:28:53.745: INFO: Waiting up to 5m0s for pod "pod-badeab26-2d8d-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-7bmwq" to be "success or failure"
Jan  2 18:28:53.773: INFO: Pod "pod-badeab26-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.575679ms
Jan  2 18:28:55.792: INFO: Pod "pod-badeab26-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047256556s
Jan  2 18:28:57.814: INFO: Pod "pod-badeab26-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068837835s
Jan  2 18:28:59.829: INFO: Pod "pod-badeab26-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084200207s
Jan  2 18:29:01.842: INFO: Pod "pod-badeab26-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097296689s
Jan  2 18:29:03.884: INFO: Pod "pod-badeab26-2d8d-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139410585s
STEP: Saw pod success
Jan  2 18:29:03.884: INFO: Pod "pod-badeab26-2d8d-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:29:03.904: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-badeab26-2d8d-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 18:29:04.018: INFO: Waiting for pod pod-badeab26-2d8d-11ea-b611-0242ac110005 to disappear
Jan  2 18:29:04.030: INFO: Pod pod-badeab26-2d8d-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:29:04.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7bmwq" for this suite.
Jan  2 18:29:10.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:29:10.288: INFO: namespace: e2e-tests-emptydir-7bmwq, resource: bindings, ignored listing per whitelist
Jan  2 18:29:10.410: INFO: namespace e2e-tests-emptydir-7bmwq deletion completed in 6.364433979s

• [SLOW TEST:16.901 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:29:10.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0102 18:29:21.012905       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  2 18:29:21.013: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:29:21.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hz4ff" for this suite.
Jan  2 18:29:29.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:29:29.243: INFO: namespace: e2e-tests-gc-hz4ff, resource: bindings, ignored listing per whitelist
Jan  2 18:29:29.339: INFO: namespace e2e-tests-gc-hz4ff deletion completed in 8.321720416s

• [SLOW TEST:18.929 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:29:29.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  2 18:29:29.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:30.122: INFO: stderr: ""
Jan  2 18:29:30.122: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 18:29:30.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:30.399: INFO: stderr: ""
Jan  2 18:29:30.400: INFO: stdout: "update-demo-nautilus-nkdhv update-demo-nautilus-vcxxk "
Jan  2 18:29:30.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkdhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:30.618: INFO: stderr: ""
Jan  2 18:29:30.618: INFO: stdout: ""
Jan  2 18:29:30.618: INFO: update-demo-nautilus-nkdhv is created but not running
Jan  2 18:29:35.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:35.827: INFO: stderr: ""
Jan  2 18:29:35.827: INFO: stdout: "update-demo-nautilus-nkdhv update-demo-nautilus-vcxxk "
Jan  2 18:29:35.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkdhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:35.981: INFO: stderr: ""
Jan  2 18:29:35.981: INFO: stdout: ""
Jan  2 18:29:35.981: INFO: update-demo-nautilus-nkdhv is created but not running
Jan  2 18:29:40.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:41.164: INFO: stderr: ""
Jan  2 18:29:41.164: INFO: stdout: "update-demo-nautilus-nkdhv update-demo-nautilus-vcxxk "
Jan  2 18:29:41.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkdhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:41.392: INFO: stderr: ""
Jan  2 18:29:41.392: INFO: stdout: ""
Jan  2 18:29:41.392: INFO: update-demo-nautilus-nkdhv is created but not running
Jan  2 18:29:46.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:46.718: INFO: stderr: ""
Jan  2 18:29:46.718: INFO: stdout: "update-demo-nautilus-nkdhv update-demo-nautilus-vcxxk "
Jan  2 18:29:46.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkdhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:46.857: INFO: stderr: ""
Jan  2 18:29:46.857: INFO: stdout: "true"
Jan  2 18:29:46.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkdhv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:47.031: INFO: stderr: ""
Jan  2 18:29:47.031: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 18:29:47.031: INFO: validating pod update-demo-nautilus-nkdhv
Jan  2 18:29:47.078: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 18:29:47.078: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 18:29:47.078: INFO: update-demo-nautilus-nkdhv is verified up and running
Jan  2 18:29:47.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcxxk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:47.185: INFO: stderr: ""
Jan  2 18:29:47.185: INFO: stdout: "true"
Jan  2 18:29:47.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcxxk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:47.319: INFO: stderr: ""
Jan  2 18:29:47.319: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 18:29:47.319: INFO: validating pod update-demo-nautilus-vcxxk
Jan  2 18:29:47.332: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 18:29:47.333: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 18:29:47.333: INFO: update-demo-nautilus-vcxxk is verified up and running
STEP: scaling down the replication controller
Jan  2 18:29:47.338: INFO: scanned /root for discovery docs: 
Jan  2 18:29:47.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:49.453: INFO: stderr: ""
Jan  2 18:29:49.453: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 18:29:49.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:49.638: INFO: stderr: ""
Jan  2 18:29:49.638: INFO: stdout: "update-demo-nautilus-nkdhv update-demo-nautilus-vcxxk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  2 18:29:54.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:29:54.852: INFO: stderr: ""
Jan  2 18:29:54.852: INFO: stdout: "update-demo-nautilus-nkdhv update-demo-nautilus-vcxxk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  2 18:29:59.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:00.062: INFO: stderr: ""
Jan  2 18:30:00.062: INFO: stdout: "update-demo-nautilus-nkdhv update-demo-nautilus-vcxxk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  2 18:30:05.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:05.282: INFO: stderr: ""
Jan  2 18:30:05.282: INFO: stdout: "update-demo-nautilus-vcxxk "
Jan  2 18:30:05.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcxxk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:05.440: INFO: stderr: ""
Jan  2 18:30:05.440: INFO: stdout: "true"
Jan  2 18:30:05.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcxxk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:05.577: INFO: stderr: ""
Jan  2 18:30:05.577: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 18:30:05.577: INFO: validating pod update-demo-nautilus-vcxxk
Jan  2 18:30:05.604: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 18:30:05.604: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 18:30:05.604: INFO: update-demo-nautilus-vcxxk is verified up and running
STEP: scaling up the replication controller
Jan  2 18:30:05.607: INFO: scanned /root for discovery docs: 
Jan  2 18:30:05.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:07.056: INFO: stderr: ""
Jan  2 18:30:07.057: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  2 18:30:07.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:07.214: INFO: stderr: ""
Jan  2 18:30:07.214: INFO: stdout: "update-demo-nautilus-nf4dm update-demo-nautilus-vcxxk "
Jan  2 18:30:07.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf4dm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:07.305: INFO: stderr: ""
Jan  2 18:30:07.305: INFO: stdout: ""
Jan  2 18:30:07.305: INFO: update-demo-nautilus-nf4dm is created but not running
Jan  2 18:30:12.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:12.568: INFO: stderr: ""
Jan  2 18:30:12.569: INFO: stdout: "update-demo-nautilus-nf4dm update-demo-nautilus-vcxxk "
Jan  2 18:30:12.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf4dm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:12.769: INFO: stderr: ""
Jan  2 18:30:12.769: INFO: stdout: ""
Jan  2 18:30:12.769: INFO: update-demo-nautilus-nf4dm is created but not running
Jan  2 18:30:17.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:17.989: INFO: stderr: ""
Jan  2 18:30:17.989: INFO: stdout: "update-demo-nautilus-nf4dm update-demo-nautilus-vcxxk "
Jan  2 18:30:17.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf4dm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:18.145: INFO: stderr: ""
Jan  2 18:30:18.146: INFO: stdout: "true"
Jan  2 18:30:18.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nf4dm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:18.271: INFO: stderr: ""
Jan  2 18:30:18.272: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 18:30:18.272: INFO: validating pod update-demo-nautilus-nf4dm
Jan  2 18:30:18.281: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 18:30:18.281: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 18:30:18.281: INFO: update-demo-nautilus-nf4dm is verified up and running
Jan  2 18:30:18.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcxxk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:18.385: INFO: stderr: ""
Jan  2 18:30:18.385: INFO: stdout: "true"
Jan  2 18:30:18.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcxxk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:18.573: INFO: stderr: ""
Jan  2 18:30:18.574: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  2 18:30:18.574: INFO: validating pod update-demo-nautilus-vcxxk
Jan  2 18:30:18.608: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  2 18:30:18.608: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  2 18:30:18.608: INFO: update-demo-nautilus-vcxxk is verified up and running
STEP: using delete to clean up resources
Jan  2 18:30:18.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:18.757: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  2 18:30:18.757: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  2 18:30:18.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-tg7zq'
Jan  2 18:30:18.916: INFO: stderr: "No resources found.\n"
Jan  2 18:30:18.916: INFO: stdout: ""
Jan  2 18:30:18.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-tg7zq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  2 18:30:19.202: INFO: stderr: ""
Jan  2 18:30:19.202: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:30:19.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tg7zq" for this suite.
Jan  2 18:30:43.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:30:43.325: INFO: namespace: e2e-tests-kubectl-tg7zq, resource: bindings, ignored listing per whitelist
Jan  2 18:30:43.344: INFO: namespace e2e-tests-kubectl-tg7zq deletion completed in 24.006567421s

• [SLOW TEST:74.005 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
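The go-template passed to `kubectl get pods` above keeps only pods without a `metadata.deletionTimestamp`, i.e. pods not already marked for deletion. A rough Python sketch of that filter (the `live_pod_names` helper is invented for illustration, not part of the e2e framework):

```python
def live_pod_names(pods):
    """Mimic the go-template used above: keep only pods that have
    no metadata.deletionTimestamp, i.e. pods not yet being deleted."""
    return [
        p["metadata"]["name"]
        for p in pods
        if not p["metadata"].get("deletionTimestamp")
    ]

# Example: one pod mid-deletion, one live.
pods = [
    {"metadata": {"name": "update-demo-nautilus-abc",
                  "deletionTimestamp": "2020-01-02T18:30:18Z"}},
    {"metadata": {"name": "update-demo-nautilus-xyz"}},
]
print(live_pod_names(pods))  # ['update-demo-nautilus-xyz']
```

An empty result, as in the log above, confirms the force-deleted replication controller left no surviving pods behind.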
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:30:43.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-fc6525b8-2d8d-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 18:30:43.665: INFO: Waiting up to 5m0s for pod "pod-secrets-fc66cbd1-2d8d-11ea-b611-0242ac110005" in namespace "e2e-tests-secrets-qtlmw" to be "success or failure"
Jan  2 18:30:43.684: INFO: Pod "pod-secrets-fc66cbd1-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.018635ms
Jan  2 18:30:45.966: INFO: Pod "pod-secrets-fc66cbd1-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30043569s
Jan  2 18:30:47.980: INFO: Pod "pod-secrets-fc66cbd1-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31410125s
Jan  2 18:30:50.391: INFO: Pod "pod-secrets-fc66cbd1-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.72571699s
Jan  2 18:30:52.418: INFO: Pod "pod-secrets-fc66cbd1-2d8d-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.752265187s
Jan  2 18:30:54.448: INFO: Pod "pod-secrets-fc66cbd1-2d8d-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.782304603s
STEP: Saw pod success
Jan  2 18:30:54.448: INFO: Pod "pod-secrets-fc66cbd1-2d8d-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:30:54.460: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fc66cbd1-2d8d-11ea-b611-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  2 18:30:54.772: INFO: Waiting for pod pod-secrets-fc66cbd1-2d8d-11ea-b611-0242ac110005 to disappear
Jan  2 18:30:54.832: INFO: Pod pod-secrets-fc66cbd1-2d8d-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:30:54.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qtlmw" for this suite.
Jan  2 18:31:00.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:31:01.074: INFO: namespace: e2e-tests-secrets-qtlmw, resource: bindings, ignored listing per whitelist
Jan  2 18:31:01.168: INFO: namespace e2e-tests-secrets-qtlmw deletion completed in 6.327803471s

• [SLOW TEST:17.823 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
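The `Waiting up to 5m0s ... to be "success or failure"` lines above follow a simple poll-until-terminal-phase loop. A minimal Python sketch of that pattern, with injectable clock and sleep so it can run without a cluster (`wait_for_pod_condition` is a hypothetical name, not the framework's API):

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, poll_s=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll a pod's phase until it reaches Succeeded or Failed,
    like the framework's 5m 'success or failure' wait logged above."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll_s)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated phase sequence matching the log: Pending a few times, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), sleep=lambda s: None)
print(result)  # Succeeded
```

The real framework also records the elapsed time at each poll, which is where the `Elapsed: 2.30043569s` style lines come from.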
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:31:01.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  2 18:31:01.385: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t4cgw,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4cgw/configmaps/e2e-watch-test-watch-closed,UID:06f7538a-2d8e-11ea-a994-fa163e34d433,ResourceVersion:16945395,Generation:0,CreationTimestamp:2020-01-02 18:31:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  2 18:31:01.386: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t4cgw,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4cgw/configmaps/e2e-watch-test-watch-closed,UID:06f7538a-2d8e-11ea-a994-fa163e34d433,ResourceVersion:16945396,Generation:0,CreationTimestamp:2020-01-02 18:31:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  2 18:31:01.440: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t4cgw,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4cgw/configmaps/e2e-watch-test-watch-closed,UID:06f7538a-2d8e-11ea-a994-fa163e34d433,ResourceVersion:16945397,Generation:0,CreationTimestamp:2020-01-02 18:31:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  2 18:31:01.440: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t4cgw,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4cgw/configmaps/e2e-watch-test-watch-closed,UID:06f7538a-2d8e-11ea-a994-fa163e34d433,ResourceVersion:16945398,Generation:0,CreationTimestamp:2020-01-02 18:31:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:31:01.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-t4cgw" for this suite.
Jan  2 18:31:07.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:31:07.525: INFO: namespace: e2e-tests-watch-t4cgw, resource: bindings, ignored listing per whitelist
Jan  2 18:31:07.654: INFO: namespace e2e-tests-watch-t4cgw deletion completed in 6.193608696s

• [SLOW TEST:6.486 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
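The watch-restart test above records the last `ResourceVersion` seen before the first watch is closed (16945396) and then opens a new watch from that point, expecting only the later MODIFIED and DELETED events. A toy Python sketch of that bookmark-and-resume idea; note that real API server resourceVersions are opaque strings and must not be compared numerically in production code, so the `int()` comparison here is purely for the simulation:

```python
def replay_since(events, last_rv):
    """Resume a watch: deliver only events whose resourceVersion is
    newer than the bookmark recorded before the first watch closed."""
    return [e for e in events if int(e["resourceVersion"]) > int(last_rv)]

# Event stream mirroring the configmap log lines above.
events = [
    {"type": "ADDED",    "resourceVersion": "16945395"},
    {"type": "MODIFIED", "resourceVersion": "16945396"},
    {"type": "MODIFIED", "resourceVersion": "16945397"},
    {"type": "DELETED",  "resourceVersion": "16945398"},
]
resumed = replay_since(events, "16945396")
print([e["type"] for e in resumed])  # ['MODIFIED', 'DELETED']
```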
SSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:31:07.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  2 18:31:34.035: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ghrpv PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 18:31:34.035: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 18:31:34.515: INFO: Exec stderr: ""
Jan  2 18:31:34.516: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ghrpv PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 18:31:34.516: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 18:31:35.121: INFO: Exec stderr: ""
Jan  2 18:31:35.121: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ghrpv PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 18:31:35.122: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 18:31:35.431: INFO: Exec stderr: ""
Jan  2 18:31:35.432: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ghrpv PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 18:31:35.432: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 18:31:35.899: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  2 18:31:35.900: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ghrpv PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 18:31:35.900: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 18:31:36.225: INFO: Exec stderr: ""
Jan  2 18:31:36.226: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ghrpv PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 18:31:36.226: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 18:31:36.632: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  2 18:31:36.633: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ghrpv PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 18:31:36.633: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 18:31:36.972: INFO: Exec stderr: ""
Jan  2 18:31:36.973: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ghrpv PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 18:31:36.973: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 18:31:37.404: INFO: Exec stderr: ""
Jan  2 18:31:37.404: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ghrpv PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 18:31:37.404: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 18:31:37.723: INFO: Exec stderr: ""
Jan  2 18:31:37.723: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-ghrpv PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 18:31:37.723: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 18:31:38.028: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:31:38.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-ghrpv" for this suite.
Jan  2 18:32:44.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:32:44.244: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-ghrpv, resource: bindings, ignored listing per whitelist
Jan  2 18:32:44.458: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-ghrpv deletion completed in 1m6.402847881s

• [SLOW TEST:96.803 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
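The test above `cat`s `/etc/hosts` in each container and decides whether the kubelet manages it. The kubelet prepends a marker comment when it writes the file, which is what the verification keys off; a small Python sketch of that check (the marker text is my assumption of the kubelet's header, and `is_kubelet_managed` is an invented helper):

```python
# Assumed marker: the kubelet writes this header into managed /etc/hosts files.
KUBELET_MARKER = "# Kubernetes-managed hosts file"

def is_kubelet_managed(hosts_content):
    """Return True if the file content starts with the kubelet's
    management marker, as checked by the e2e test above."""
    return hosts_content.lstrip().startswith(KUBELET_MARKER)

managed = "# Kubernetes-managed hosts file.\n127.0.0.1 localhost\n"
original = "127.0.0.1 localhost\n"
print(is_kubelet_managed(managed), is_kubelet_managed(original))  # True False
```

This explains the three cases in the log: a normal pod gets a managed file, a container that mounts its own `/etc/hosts` does not, and a `hostNetwork=true` pod sees the node's unmanaged file.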
SS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:32:44.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:32:44.701: INFO: Creating ReplicaSet my-hostname-basic-448e6c56-2d8e-11ea-b611-0242ac110005
Jan  2 18:32:44.729: INFO: Pod name my-hostname-basic-448e6c56-2d8e-11ea-b611-0242ac110005: Found 0 pods out of 1
Jan  2 18:32:49.754: INFO: Pod name my-hostname-basic-448e6c56-2d8e-11ea-b611-0242ac110005: Found 1 pods out of 1
Jan  2 18:32:49.754: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-448e6c56-2d8e-11ea-b611-0242ac110005" is running
Jan  2 18:32:55.777: INFO: Pod "my-hostname-basic-448e6c56-2d8e-11ea-b611-0242ac110005-nbrvz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 18:32:45 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 18:32:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-448e6c56-2d8e-11ea-b611-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 18:32:45 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-448e6c56-2d8e-11ea-b611-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-02 18:32:44 +0000 UTC Reason: Message:}])
Jan  2 18:32:55.777: INFO: Trying to dial the pod
Jan  2 18:33:00.840: INFO: Controller my-hostname-basic-448e6c56-2d8e-11ea-b611-0242ac110005: Got expected result from replica 1 [my-hostname-basic-448e6c56-2d8e-11ea-b611-0242ac110005-nbrvz]: "my-hostname-basic-448e6c56-2d8e-11ea-b611-0242ac110005-nbrvz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:33:00.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-6kgp6" for this suite.
Jan  2 18:33:06.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:33:07.062: INFO: namespace: e2e-tests-replicaset-6kgp6, resource: bindings, ignored listing per whitelist
Jan  2 18:33:07.070: INFO: namespace e2e-tests-replicaset-6kgp6 deletion completed in 6.214423382s

• [SLOW TEST:22.611 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
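The "Got expected result from replica 1 ... 1 of 1 required successes" line above comes from dialing each replica and collecting the hostnames they echo back until every expected pod name has answered. A sketch of that success-counting check in Python (`all_replicas_responded` is a hypothetical helper name):

```python
def all_replicas_responded(expected_names, responses):
    """The test dials each replica until every expected pod name has
    echoed its own hostname back ('N of N required successes')."""
    return set(expected_names) <= set(responses)

expected = ["my-hostname-basic-nbrvz"]
print(all_replicas_responded(expected, []))                           # False
print(all_replicas_responded(expected, ["my-hostname-basic-nbrvz"]))  # True
```

With more replicas the same subset test naturally waits for every distinct hostname to appear at least once.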
SSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:33:07.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-5200c183-2d8e-11ea-b611-0242ac110005
STEP: Creating secret with name s-test-opt-upd-5200c206-2d8e-11ea-b611-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5200c183-2d8e-11ea-b611-0242ac110005
STEP: Updating secret s-test-opt-upd-5200c206-2d8e-11ea-b611-0242ac110005
STEP: Creating secret with name s-test-opt-create-5200c236-2d8e-11ea-b611-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:33:25.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bjbwx" for this suite.
Jan  2 18:33:49.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:33:50.065: INFO: namespace: e2e-tests-projected-bjbwx, resource: bindings, ignored listing per whitelist
Jan  2 18:33:50.110: INFO: namespace e2e-tests-projected-bjbwx deletion completed in 24.262215803s

• [SLOW TEST:43.040 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
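The projected-secret test above deletes one optional secret, updates another, and creates a third, then waits for the volume to converge. The key semantics being exercised: a missing secret marked `optional: true` is simply skipped rather than failing the mount. A simplified Python model of that projection (all names here are illustrative, not the real kubelet code path):

```python
def project_secrets(secret_store, sources):
    """Assemble projected volume contents. A missing secret marked
    optional is skipped; a missing required secret is an error."""
    contents = {}
    for name, optional in sources:
        secret = secret_store.get(name)
        if secret is None:
            if optional:
                continue  # optional and absent: leave its keys out
            raise KeyError(f"required secret {name!r} not found")
        contents.update(secret)
    return contents

# After the test deletes the 'opt-del' secret, only 'opt-upd' remains.
store = {"s-test-opt-upd": {"data-1": "value-1"}}
sources = [("s-test-opt-del", True), ("s-test-opt-upd", True)]
print(project_secrets(store, sources))  # {'data-1': 'value-1'}
```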
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:33:50.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  2 18:33:50.547: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  2 18:33:50.586: INFO: Waiting for terminating namespaces to be deleted...
Jan  2 18:33:50.605: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  2 18:33:50.689: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 18:33:50.689: INFO: 	Container coredns ready: true, restart count 0
Jan  2 18:33:50.689: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  2 18:33:50.689: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  2 18:33:50.689: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 18:33:50.689: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  2 18:33:50.689: INFO: 	Container weave ready: true, restart count 0
Jan  2 18:33:50.689: INFO: 	Container weave-npc ready: true, restart count 0
Jan  2 18:33:50.689: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  2 18:33:50.689: INFO: 	Container coredns ready: true, restart count 0
Jan  2 18:33:50.689: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 18:33:50.689: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  2 18:33:50.689: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e625928b1731af], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:33:51.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-m6kfn" for this suite.
Jan  2 18:33:57.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:33:57.869: INFO: namespace: e2e-tests-sched-pred-m6kfn, resource: bindings, ignored listing per whitelist
Jan  2 18:33:58.016: INFO: namespace e2e-tests-sched-pred-m6kfn deletion completed in 6.223520794s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.906 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
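The `FailedScheduling ... 1 node(s) didn't match node selector` event above is the scheduler's node-selector predicate rejecting the pod: every key/value pair in the pod's `nodeSelector` must be present among the node's labels. A compact Python rendering of that predicate (helper name invented; the sample selector below is a stand-in, since the test's actual nonempty selector is not shown in the log):

```python
def node_selector_matches(node_labels, pod_node_selector):
    """Node-selector predicate: every key/value in the pod's
    nodeSelector must appear exactly in the node's labels."""
    return all(node_labels.get(k) == v for k, v in pod_node_selector.items())

node = {"kubernetes.io/hostname": "hunter-server-hu5at5svl7ps"}
pod_selector = {"nonexistent-label": "nonempty"}  # deliberately unmatchable
print(node_selector_matches(node, pod_selector))  # False -> FailedScheduling
print(node_selector_matches(node, {}))            # True (empty selector matches)
```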
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:33:58.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:33:58.343: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  2 18:34:03.466: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  2 18:34:09.497: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  2 18:34:11.515: INFO: Creating deployment "test-rollover-deployment"
Jan  2 18:34:11.539: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  2 18:34:13.560: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  2 18:34:13.574: INFO: Ensure that both replica sets have 1 created replica
Jan  2 18:34:13.585: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  2 18:34:13.617: INFO: Updating deployment test-rollover-deployment
Jan  2 18:34:13.617: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  2 18:34:15.908: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  2 18:34:15.919: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  2 18:34:15.924: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 18:34:15.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586855, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:34:17.954: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 18:34:17.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586855, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:34:19.965: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 18:34:19.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586855, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:34:23.114: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 18:34:23.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586855, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:34:23.946: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 18:34:23.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586855, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:34:25.950: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 18:34:25.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586855, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:34:27.955: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 18:34:27.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586866, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:34:29.948: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 18:34:29.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586866, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:34:31.948: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 18:34:31.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586866, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:34:33.951: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 18:34:33.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586866, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:34:35.950: INFO: all replica sets need to contain the pod-template-hash label
Jan  2 18:34:35.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586866, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713586851, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:34:38.620: INFO: 
Jan  2 18:34:38.620: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 18:34:38.836: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-krhmk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-krhmk/deployments/test-rollover-deployment,UID:784daa8c-2d8e-11ea-a994-fa163e34d433,ResourceVersion:16945875,Generation:2,CreationTimestamp:2020-01-02 18:34:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-02 18:34:11 +0000 UTC 2020-01-02 18:34:11 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-02 18:34:36 +0000 UTC 2020-01-02 18:34:11 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  2 18:34:38.870: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-krhmk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-krhmk/replicasets/test-rollover-deployment-5b8479fdb6,UID:7990521c-2d8e-11ea-a994-fa163e34d433,ResourceVersion:16945866,Generation:2,CreationTimestamp:2020-01-02 18:34:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 784daa8c-2d8e-11ea-a994-fa163e34d433 0xc001b4a877 0xc001b4a878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  2 18:34:38.870: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  2 18:34:38.871: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-krhmk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-krhmk/replicasets/test-rollover-controller,UID:70664cf0-2d8e-11ea-a994-fa163e34d433,ResourceVersion:16945874,Generation:2,CreationTimestamp:2020-01-02 18:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 784daa8c-2d8e-11ea-a994-fa163e34d433 0xc001b4a587 0xc001b4a588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 18:34:38.876: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-krhmk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-krhmk/replicasets/test-rollover-deployment-58494b7559,UID:7856a87b-2d8e-11ea-a994-fa163e34d433,ResourceVersion:16945831,Generation:2,CreationTimestamp:2020-01-02 18:34:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 784daa8c-2d8e-11ea-a994-fa163e34d433 0xc001b4a7a7 0xc001b4a7a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 18:34:38.944: INFO: Pod "test-rollover-deployment-5b8479fdb6-87zxv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-87zxv,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-krhmk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-krhmk/pods/test-rollover-deployment-5b8479fdb6-87zxv,UID:7a47f783-2d8e-11ea-a994-fa163e34d433,ResourceVersion:16945851,Generation:0,CreationTimestamp:2020-01-02 18:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 7990521c-2d8e-11ea-a994-fa163e34d433 0xc001b4b997 0xc001b4b998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jprms {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jprms,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-jprms true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b4ba00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b4ba40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:34:15 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:34:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:34:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:34:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-02 18:34:15 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-02 18:34:24 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://1ec79e8f6c8a5f0170524a0e7cf65c5c2a4e7e1a0334fdbf2aeaadf384a14b63}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:34:38.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-krhmk" for this suite.
Jan  2 18:34:48.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:34:49.053: INFO: namespace: e2e-tests-deployment-krhmk, resource: bindings, ignored listing per whitelist
Jan  2 18:34:49.127: INFO: namespace e2e-tests-deployment-krhmk deletion completed in 10.166653896s

• [SLOW TEST:51.110 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
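The rollover spec above polls the `v1.DeploymentStatus` dump until the new ReplicaSet's pods are updated and ready (compare `UpdatedReplicas`/`ReadyReplicas` across the repeated status lines). A minimal, cluster-free sketch of pulling those counters out of one such status line with standard shell tools — the sample string is copied from this run, and the field names come from the logged struct, not from any kubectl output:

```shell
# Sample v1.DeploymentStatus fragment as printed by the e2e framework above.
status='v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1}'

# Extract the counters the test compares while waiting for rollover to finish.
updated=$(printf '%s' "$status" | grep -o 'UpdatedReplicas:[0-9]*' | cut -d: -f2)
ready=$(printf '%s' "$status" | grep -o 'ReadyReplicas:[0-9]*' | cut -d: -f2)

echo "updated=$updated ready=$ready"
```

On a live cluster the equivalent check is typically `kubectl rollout status deployment/test-rollover-deployment`, which blocks until the same condition the test waits for (`NewReplicaSetAvailable`) is met.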
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:34:49.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  2 18:34:49.494: INFO: Number of nodes with available pods: 0
Jan  2 18:34:49.495: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:34:51.386: INFO: Number of nodes with available pods: 0
Jan  2 18:34:51.386: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:34:51.519: INFO: Number of nodes with available pods: 0
Jan  2 18:34:51.519: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:34:52.605: INFO: Number of nodes with available pods: 0
Jan  2 18:34:52.605: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:34:53.540: INFO: Number of nodes with available pods: 0
Jan  2 18:34:53.540: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:34:54.522: INFO: Number of nodes with available pods: 0
Jan  2 18:34:54.522: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:34:55.900: INFO: Number of nodes with available pods: 0
Jan  2 18:34:55.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:34:56.541: INFO: Number of nodes with available pods: 0
Jan  2 18:34:56.541: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:34:57.512: INFO: Number of nodes with available pods: 1
Jan  2 18:34:57.512: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  2 18:34:57.599: INFO: Number of nodes with available pods: 0
Jan  2 18:34:57.599: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:34:58.627: INFO: Number of nodes with available pods: 0
Jan  2 18:34:58.627: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:34:59.642: INFO: Number of nodes with available pods: 0
Jan  2 18:34:59.642: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:00.769: INFO: Number of nodes with available pods: 0
Jan  2 18:35:00.769: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:01.625: INFO: Number of nodes with available pods: 0
Jan  2 18:35:01.625: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:02.717: INFO: Number of nodes with available pods: 0
Jan  2 18:35:02.717: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:03.628: INFO: Number of nodes with available pods: 0
Jan  2 18:35:03.628: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:04.621: INFO: Number of nodes with available pods: 0
Jan  2 18:35:04.622: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:05.625: INFO: Number of nodes with available pods: 0
Jan  2 18:35:05.625: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:06.637: INFO: Number of nodes with available pods: 0
Jan  2 18:35:06.638: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:07.624: INFO: Number of nodes with available pods: 0
Jan  2 18:35:07.624: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:08.622: INFO: Number of nodes with available pods: 0
Jan  2 18:35:08.622: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:09.624: INFO: Number of nodes with available pods: 0
Jan  2 18:35:09.625: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:10.680: INFO: Number of nodes with available pods: 0
Jan  2 18:35:10.680: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:11.624: INFO: Number of nodes with available pods: 0
Jan  2 18:35:11.624: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:12.680: INFO: Number of nodes with available pods: 0
Jan  2 18:35:12.680: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:13.827: INFO: Number of nodes with available pods: 0
Jan  2 18:35:13.827: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:14.716: INFO: Number of nodes with available pods: 0
Jan  2 18:35:14.716: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:15.645: INFO: Number of nodes with available pods: 0
Jan  2 18:35:15.645: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:16.623: INFO: Number of nodes with available pods: 0
Jan  2 18:35:16.623: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:17.633: INFO: Number of nodes with available pods: 0
Jan  2 18:35:17.633: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:18.736: INFO: Number of nodes with available pods: 0
Jan  2 18:35:18.736: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:19.759: INFO: Number of nodes with available pods: 0
Jan  2 18:35:19.760: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:20.664: INFO: Number of nodes with available pods: 0
Jan  2 18:35:20.664: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:21.625: INFO: Number of nodes with available pods: 0
Jan  2 18:35:21.625: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:35:22.620: INFO: Number of nodes with available pods: 1
Jan  2 18:35:22.620: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-749bv, will wait for the garbage collector to delete the pods
Jan  2 18:35:22.697: INFO: Deleting DaemonSet.extensions daemon-set took: 16.791329ms
Jan  2 18:35:22.798: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.749226ms
Jan  2 18:35:32.653: INFO: Number of nodes with available pods: 0
Jan  2 18:35:32.653: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 18:35:32.661: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-749bv/daemonsets","resourceVersion":"16946015"},"items":null}

Jan  2 18:35:32.665: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-749bv/pods","resourceVersion":"16946015"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:35:32.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-749bv" for this suite.
Jan  2 18:35:40.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:35:40.859: INFO: namespace: e2e-tests-daemonsets-749bv, resource: bindings, ignored listing per whitelist
Jan  2 18:35:40.872: INFO: namespace e2e-tests-daemonsets-749bv deletion completed in 8.185428577s

• [SLOW TEST:51.745 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
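The DaemonSet spec above repeats "Number of nodes with available pods: 0" until every node (one, in this single-node run) reports an available pod, then kills the pod and waits for the same condition again. A runnable sketch of that poll-until-available loop — `available=$attempts` is a stand-in for the real cluster query (something like counting Running daemon pods per node), so the loop terminates without a cluster:

```shell
# Poll-until-available pattern used by the DaemonSet test, simulated locally.
target=3      # stand-in for the number of schedulable nodes
available=0
attempts=0
while [ "$available" -lt "$target" ] && [ "$attempts" -lt 10 ]; do
  attempts=$((attempts + 1))
  # Stand-in for: querying how many nodes currently run an available daemon pod.
  available=$attempts
done
echo "attempts=$attempts available=$available"
```

The bounded attempt counter mirrors the test's overall timeout: the real framework gives up and fails the spec if availability never reaches the node count.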
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:35:40.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan  2 18:35:41.074: INFO: Waiting up to 5m0s for pod "client-containers-ada644ea-2d8e-11ea-b611-0242ac110005" in namespace "e2e-tests-containers-vcfr9" to be "success or failure"
Jan  2 18:35:41.088: INFO: Pod "client-containers-ada644ea-2d8e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.252018ms
Jan  2 18:35:43.106: INFO: Pod "client-containers-ada644ea-2d8e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031726418s
Jan  2 18:35:45.120: INFO: Pod "client-containers-ada644ea-2d8e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045707124s
Jan  2 18:35:47.143: INFO: Pod "client-containers-ada644ea-2d8e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069267084s
Jan  2 18:35:49.156: INFO: Pod "client-containers-ada644ea-2d8e-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082595159s
Jan  2 18:35:51.180: INFO: Pod "client-containers-ada644ea-2d8e-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106259135s
STEP: Saw pod success
Jan  2 18:35:51.180: INFO: Pod "client-containers-ada644ea-2d8e-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:35:51.187: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ada644ea-2d8e-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 18:35:51.320: INFO: Waiting for pod client-containers-ada644ea-2d8e-11ea-b611-0242ac110005 to disappear
Jan  2 18:35:51.328: INFO: Pod client-containers-ada644ea-2d8e-11ea-b611-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:35:51.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-vcfr9" for this suite.
Jan  2 18:35:57.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:35:57.595: INFO: namespace: e2e-tests-containers-vcfr9, resource: bindings, ignored listing per whitelist
Jan  2 18:35:57.635: INFO: namespace e2e-tests-containers-vcfr9 deletion completed in 6.299084702s

• [SLOW TEST:16.762 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:35:57.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b7b74b86-2d8e-11ea-b611-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-b7b74b86-2d8e-11ea-b611-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:37:25.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f7ld5" for this suite.
Jan  2 18:37:49.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:37:49.666: INFO: namespace: e2e-tests-projected-f7ld5, resource: bindings, ignored listing per whitelist
Jan  2 18:37:49.909: INFO: namespace e2e-tests-projected-f7ld5 deletion completed in 24.430235372s

• [SLOW TEST:112.274 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:37:49.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-lcpqr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-lcpqr to expose endpoints map[]
Jan  2 18:37:50.221: INFO: Get endpoints failed (38.375182ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan  2 18:37:51.281: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-lcpqr exposes endpoints map[] (1.098487508s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-lcpqr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-lcpqr to expose endpoints map[pod1:[100]]
Jan  2 18:37:56.057: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.743825529s elapsed, will retry)
Jan  2 18:38:00.418: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-lcpqr exposes endpoints map[pod1:[100]] (9.104669652s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-lcpqr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-lcpqr to expose endpoints map[pod1:[100] pod2:[101]]
Jan  2 18:38:06.981: INFO: Unexpected endpoints: found map[fb4e33d1-2d8e-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (6.538869572s elapsed, will retry)
Jan  2 18:38:10.174: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-lcpqr exposes endpoints map[pod1:[100] pod2:[101]] (9.731249161s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-lcpqr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-lcpqr to expose endpoints map[pod2:[101]]
Jan  2 18:38:10.238: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-lcpqr exposes endpoints map[pod2:[101]] (34.157347ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-lcpqr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-lcpqr to expose endpoints map[]
Jan  2 18:38:11.592: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-lcpqr exposes endpoints map[] (1.230055669s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:38:11.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-lcpqr" for this suite.
Jan  2 18:38:36.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:38:36.184: INFO: namespace: e2e-tests-services-lcpqr, resource: bindings, ignored listing per whitelist
Jan  2 18:38:36.191: INFO: namespace e2e-tests-services-lcpqr deletion completed in 24.288329002s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:46.281 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:38:36.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-16366b22-2d8f-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 18:38:36.473: INFO: Waiting up to 5m0s for pod "pod-configmaps-1637faa9-2d8f-11ea-b611-0242ac110005" in namespace "e2e-tests-configmap-9vvpq" to be "success or failure"
Jan  2 18:38:36.496: INFO: Pod "pod-configmaps-1637faa9-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.628379ms
Jan  2 18:38:38.737: INFO: Pod "pod-configmaps-1637faa9-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263122624s
Jan  2 18:38:40.757: INFO: Pod "pod-configmaps-1637faa9-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283828607s
Jan  2 18:38:42.849: INFO: Pod "pod-configmaps-1637faa9-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375933376s
Jan  2 18:38:44.873: INFO: Pod "pod-configmaps-1637faa9-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.399707957s
Jan  2 18:38:46.895: INFO: Pod "pod-configmaps-1637faa9-2d8f-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.421472044s
STEP: Saw pod success
Jan  2 18:38:46.895: INFO: Pod "pod-configmaps-1637faa9-2d8f-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:38:48.319: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1637faa9-2d8f-11ea-b611-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 18:38:48.885: INFO: Waiting for pod pod-configmaps-1637faa9-2d8f-11ea-b611-0242ac110005 to disappear
Jan  2 18:38:48.900: INFO: Pod pod-configmaps-1637faa9-2d8f-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:38:48.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9vvpq" for this suite.
Jan  2 18:38:55.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:38:55.052: INFO: namespace: e2e-tests-configmap-9vvpq, resource: bindings, ignored listing per whitelist
Jan  2 18:38:55.248: INFO: namespace e2e-tests-configmap-9vvpq deletion completed in 6.270729114s

• [SLOW TEST:19.057 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:38:55.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:39:55.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jvzkm" for this suite.
Jan  2 18:40:19.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:40:19.906: INFO: namespace: e2e-tests-container-probe-jvzkm, resource: bindings, ignored listing per whitelist
Jan  2 18:40:19.952: INFO: namespace e2e-tests-container-probe-jvzkm deletion completed in 24.395518918s

• [SLOW TEST:84.703 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:40:19.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:40:20.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:40:31.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-d6vsv" for this suite.
Jan  2 18:41:25.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:41:25.291: INFO: namespace: e2e-tests-pods-d6vsv, resource: bindings, ignored listing per whitelist
Jan  2 18:41:25.373: INFO: namespace e2e-tests-pods-d6vsv deletion completed in 54.305159187s

• [SLOW TEST:65.420 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:41:25.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  2 18:41:25.665: INFO: Waiting up to 5m0s for pod "pod-7b0b5b42-2d8f-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-rnlhc" to be "success or failure"
Jan  2 18:41:25.729: INFO: Pod "pod-7b0b5b42-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.117362ms
Jan  2 18:41:27.748: INFO: Pod "pod-7b0b5b42-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083711769s
Jan  2 18:41:29.767: INFO: Pod "pod-7b0b5b42-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102407236s
Jan  2 18:41:32.155: INFO: Pod "pod-7b0b5b42-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.490020477s
Jan  2 18:41:34.170: INFO: Pod "pod-7b0b5b42-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.505678848s
Jan  2 18:41:36.183: INFO: Pod "pod-7b0b5b42-2d8f-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.518657363s
STEP: Saw pod success
Jan  2 18:41:36.183: INFO: Pod "pod-7b0b5b42-2d8f-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:41:36.187: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7b0b5b42-2d8f-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 18:41:36.314: INFO: Waiting for pod pod-7b0b5b42-2d8f-11ea-b611-0242ac110005 to disappear
Jan  2 18:41:36.362: INFO: Pod pod-7b0b5b42-2d8f-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:41:36.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rnlhc" for this suite.
Jan  2 18:41:42.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:41:43.076: INFO: namespace: e2e-tests-emptydir-rnlhc, resource: bindings, ignored listing per whitelist
Jan  2 18:41:43.077: INFO: namespace e2e-tests-emptydir-rnlhc deletion completed in 6.695850034s

• [SLOW TEST:17.703 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:41:43.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-rhbnm
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-rhbnm
STEP: Deleting pre-stop pod
Jan  2 18:42:08.458: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:42:08.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-rhbnm" for this suite.
Jan  2 18:42:48.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:42:48.866: INFO: namespace: e2e-tests-prestop-rhbnm, resource: bindings, ignored listing per whitelist
Jan  2 18:42:48.921: INFO: namespace e2e-tests-prestop-rhbnm deletion completed in 40.38365425s

• [SLOW TEST:65.845 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:42:48.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 18:42:49.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-vzsxx'
Jan  2 18:42:51.212: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 18:42:51.213: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan  2 18:42:51.232: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan  2 18:42:51.286: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  2 18:42:51.331: INFO: scanned /root for discovery docs: 
Jan  2 18:42:51.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-vzsxx'
Jan  2 18:43:18.657: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  2 18:43:18.658: INFO: stdout: "Created e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad\nScaling up e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan  2 18:43:18.679: INFO: stdout: "Created e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad\nScaling up e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  2 18:43:18.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vzsxx'
Jan  2 18:43:18.904: INFO: stderr: ""
Jan  2 18:43:18.905: INFO: stdout: "e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad-5d5w7 "
Jan  2 18:43:18.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad-5d5w7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzsxx'
Jan  2 18:43:19.047: INFO: stderr: ""
Jan  2 18:43:19.047: INFO: stdout: "true"
Jan  2 18:43:19.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad-5d5w7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vzsxx'
Jan  2 18:43:19.187: INFO: stderr: ""
Jan  2 18:43:19.187: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  2 18:43:19.187: INFO: e2e-test-nginx-rc-faa949118a538f12f7063ac78568dbad-5d5w7 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan  2 18:43:19.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vzsxx'
Jan  2 18:43:19.329: INFO: stderr: ""
Jan  2 18:43:19.329: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:43:19.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vzsxx" for this suite.
Jan  2 18:43:27.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:43:27.769: INFO: namespace: e2e-tests-kubectl-vzsxx, resource: bindings, ignored listing per whitelist
Jan  2 18:43:27.784: INFO: namespace e2e-tests-kubectl-vzsxx deletion completed in 8.429070294s

• [SLOW TEST:38.861 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:43:27.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-c3fbe3ae-2d8f-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 18:43:28.092: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3fd809c-2d8f-11ea-b611-0242ac110005" in namespace "e2e-tests-configmap-qzrvg" to be "success or failure"
Jan  2 18:43:28.107: INFO: Pod "pod-configmaps-c3fd809c-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.69367ms
Jan  2 18:43:30.144: INFO: Pod "pod-configmaps-c3fd809c-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05201501s
Jan  2 18:43:32.201: INFO: Pod "pod-configmaps-c3fd809c-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109174193s
Jan  2 18:43:34.526: INFO: Pod "pod-configmaps-c3fd809c-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434259473s
Jan  2 18:43:36.558: INFO: Pod "pod-configmaps-c3fd809c-2d8f-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.465693156s
Jan  2 18:43:38.590: INFO: Pod "pod-configmaps-c3fd809c-2d8f-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.497329305s
STEP: Saw pod success
Jan  2 18:43:38.590: INFO: Pod "pod-configmaps-c3fd809c-2d8f-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:43:38.598: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c3fd809c-2d8f-11ea-b611-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 18:43:38.709: INFO: Waiting for pod pod-configmaps-c3fd809c-2d8f-11ea-b611-0242ac110005 to disappear
Jan  2 18:43:38.723: INFO: Pod pod-configmaps-c3fd809c-2d8f-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:43:38.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qzrvg" for this suite.
Jan  2 18:43:44.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:43:45.024: INFO: namespace: e2e-tests-configmap-qzrvg, resource: bindings, ignored listing per whitelist
Jan  2 18:43:45.044: INFO: namespace e2e-tests-configmap-qzrvg deletion completed in 6.231374697s

• [SLOW TEST:17.260 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:43:45.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  2 18:43:45.405: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  2 18:43:50.432: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:43:50.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-gftqr" for this suite.
Jan  2 18:43:58.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:43:58.925: INFO: namespace: e2e-tests-replication-controller-gftqr, resource: bindings, ignored listing per whitelist
Jan  2 18:43:59.007: INFO: namespace e2e-tests-replication-controller-gftqr deletion completed in 8.313411897s

• [SLOW TEST:13.962 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:43:59.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan  2 18:43:59.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-hpxdl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  2 18:44:10.423: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Jan  2 18:44:10.424: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:44:12.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hpxdl" for this suite.
Jan  2 18:44:18.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:44:18.966: INFO: namespace: e2e-tests-kubectl-hpxdl, resource: bindings, ignored listing per whitelist
Jan  2 18:44:19.010: INFO: namespace e2e-tests-kubectl-hpxdl deletion completed in 6.411802185s

• [SLOW TEST:20.001 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:44:19.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:44:19.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ltgkx" for this suite.
Jan  2 18:44:37.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:44:37.547: INFO: namespace: e2e-tests-pods-ltgkx, resource: bindings, ignored listing per whitelist
Jan  2 18:44:37.581: INFO: namespace e2e-tests-pods-ltgkx deletion completed in 18.337053566s

• [SLOW TEST:18.570 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:44:37.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  2 18:44:58.087: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 18:44:58.133: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 18:45:00.135: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 18:45:00.156: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 18:45:02.134: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 18:45:02.155: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 18:45:04.134: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 18:45:04.150: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 18:45:06.134: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 18:45:06.151: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 18:45:08.134: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 18:45:08.165: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 18:45:10.134: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 18:45:10.149: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 18:45:12.134: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 18:45:12.156: INFO: Pod pod-with-prestop-http-hook still exists
Jan  2 18:45:14.134: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  2 18:45:14.153: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:45:14.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-cgbrn" for this suite.
Jan  2 18:45:38.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:45:38.487: INFO: namespace: e2e-tests-container-lifecycle-hook-cgbrn, resource: bindings, ignored listing per whitelist
Jan  2 18:45:38.600: INFO: namespace e2e-tests-container-lifecycle-hook-cgbrn deletion completed in 24.416536471s

• [SLOW TEST:61.020 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:45:38.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 18:45:38.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11fc0202-2d90-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-2htnw" to be "success or failure"
Jan  2 18:45:38.889: INFO: Pod "downwardapi-volume-11fc0202-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.434756ms
Jan  2 18:45:40.946: INFO: Pod "downwardapi-volume-11fc0202-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068271456s
Jan  2 18:45:42.993: INFO: Pod "downwardapi-volume-11fc0202-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114402451s
Jan  2 18:45:45.517: INFO: Pod "downwardapi-volume-11fc0202-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.638944292s
Jan  2 18:45:47.534: INFO: Pod "downwardapi-volume-11fc0202-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.65616675s
Jan  2 18:45:49.550: INFO: Pod "downwardapi-volume-11fc0202-2d90-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.671690211s
STEP: Saw pod success
Jan  2 18:45:49.550: INFO: Pod "downwardapi-volume-11fc0202-2d90-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:45:49.555: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-11fc0202-2d90-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 18:45:50.107: INFO: Waiting for pod downwardapi-volume-11fc0202-2d90-11ea-b611-0242ac110005 to disappear
Jan  2 18:45:50.637: INFO: Pod downwardapi-volume-11fc0202-2d90-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:45:50.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2htnw" for this suite.
Jan  2 18:45:56.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:45:57.015: INFO: namespace: e2e-tests-projected-2htnw, resource: bindings, ignored listing per whitelist
Jan  2 18:45:57.113: INFO: namespace e2e-tests-projected-2htnw deletion completed in 6.444195237s

• [SLOW TEST:18.512 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:45:57.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-l4cx7
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  2 18:45:57.260: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  2 18:46:33.511: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-l4cx7 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  2 18:46:33.511: INFO: >>> kubeConfig: /root/.kube/config
Jan  2 18:46:34.145: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:46:34.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-l4cx7" for this suite.
Jan  2 18:47:00.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:47:00.356: INFO: namespace: e2e-tests-pod-network-test-l4cx7, resource: bindings, ignored listing per whitelist
Jan  2 18:47:00.392: INFO: namespace e2e-tests-pod-network-test-l4cx7 deletion completed in 26.224135684s

• [SLOW TEST:63.278 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:47:00.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-ftv5
STEP: Creating a pod to test atomic-volume-subpath
Jan  2 18:47:00.933: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-ftv5" in namespace "e2e-tests-subpath-hzgss" to be "success or failure"
Jan  2 18:47:00.962: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Pending", Reason="", readiness=false. Elapsed: 27.987458ms
Jan  2 18:47:03.308: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.374499902s
Jan  2 18:47:05.336: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.40276656s
Jan  2 18:47:07.936: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.002678448s
Jan  2 18:47:09.955: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.021104918s
Jan  2 18:47:11.970: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.036712237s
Jan  2 18:47:13.999: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.065315432s
Jan  2 18:47:16.086: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Running", Reason="", readiness=true. Elapsed: 15.152744599s
Jan  2 18:47:18.096: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Running", Reason="", readiness=false. Elapsed: 17.162773279s
Jan  2 18:47:20.116: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Running", Reason="", readiness=false. Elapsed: 19.182420477s
Jan  2 18:47:22.146: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Running", Reason="", readiness=false. Elapsed: 21.212125832s
Jan  2 18:47:24.158: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Running", Reason="", readiness=false. Elapsed: 23.224436138s
Jan  2 18:47:26.177: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Running", Reason="", readiness=false. Elapsed: 25.243583347s
Jan  2 18:47:28.202: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Running", Reason="", readiness=false. Elapsed: 27.268236855s
Jan  2 18:47:30.224: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Running", Reason="", readiness=false. Elapsed: 29.290405247s
Jan  2 18:47:32.259: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Running", Reason="", readiness=false. Elapsed: 31.325162581s
Jan  2 18:47:34.293: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Running", Reason="", readiness=false. Elapsed: 33.359342749s
Jan  2 18:47:36.313: INFO: Pod "pod-subpath-test-projected-ftv5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.379850796s
STEP: Saw pod success
Jan  2 18:47:36.313: INFO: Pod "pod-subpath-test-projected-ftv5" satisfied condition "success or failure"
Jan  2 18:47:36.322: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-ftv5 container test-container-subpath-projected-ftv5: 
STEP: delete the pod
Jan  2 18:47:37.060: INFO: Waiting for pod pod-subpath-test-projected-ftv5 to disappear
Jan  2 18:47:37.497: INFO: Pod pod-subpath-test-projected-ftv5 no longer exists
STEP: Deleting pod pod-subpath-test-projected-ftv5
Jan  2 18:47:37.498: INFO: Deleting pod "pod-subpath-test-projected-ftv5" in namespace "e2e-tests-subpath-hzgss"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:47:37.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hzgss" for this suite.
Jan  2 18:47:45.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:47:45.622: INFO: namespace: e2e-tests-subpath-hzgss, resource: bindings, ignored listing per whitelist
Jan  2 18:47:45.724: INFO: namespace e2e-tests-subpath-hzgss deletion completed in 8.209283363s

• [SLOW TEST:45.332 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:47:45.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 18:47:46.030: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5dbaa820-2d90-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-8qgz7" to be "success or failure"
Jan  2 18:47:46.075: INFO: Pod "downwardapi-volume-5dbaa820-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.730551ms
Jan  2 18:47:48.199: INFO: Pod "downwardapi-volume-5dbaa820-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169221003s
Jan  2 18:47:50.220: INFO: Pod "downwardapi-volume-5dbaa820-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189677299s
Jan  2 18:47:52.390: INFO: Pod "downwardapi-volume-5dbaa820-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.360371456s
Jan  2 18:47:54.406: INFO: Pod "downwardapi-volume-5dbaa820-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.376300515s
Jan  2 18:47:56.936: INFO: Pod "downwardapi-volume-5dbaa820-2d90-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.90612347s
STEP: Saw pod success
Jan  2 18:47:56.936: INFO: Pod "downwardapi-volume-5dbaa820-2d90-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:47:56.949: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5dbaa820-2d90-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 18:47:57.423: INFO: Waiting for pod downwardapi-volume-5dbaa820-2d90-11ea-b611-0242ac110005 to disappear
Jan  2 18:47:57.549: INFO: Pod downwardapi-volume-5dbaa820-2d90-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:47:57.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8qgz7" for this suite.
Jan  2 18:48:03.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:48:03.766: INFO: namespace: e2e-tests-downward-api-8qgz7, resource: bindings, ignored listing per whitelist
Jan  2 18:48:03.822: INFO: namespace e2e-tests-downward-api-8qgz7 deletion completed in 6.263158855s

• [SLOW TEST:18.097 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:48:03.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  2 18:48:04.139: INFO: Waiting up to 5m0s for pod "pod-689186e1-2d90-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-5xtlb" to be "success or failure"
Jan  2 18:48:04.160: INFO: Pod "pod-689186e1-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.381243ms
Jan  2 18:48:06.175: INFO: Pod "pod-689186e1-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035438616s
Jan  2 18:48:08.196: INFO: Pod "pod-689186e1-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057126072s
Jan  2 18:48:10.507: INFO: Pod "pod-689186e1-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.367499568s
Jan  2 18:48:12.787: INFO: Pod "pod-689186e1-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.64730902s
Jan  2 18:48:14.814: INFO: Pod "pod-689186e1-2d90-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.674891352s
STEP: Saw pod success
Jan  2 18:48:14.814: INFO: Pod "pod-689186e1-2d90-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:48:14.822: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-689186e1-2d90-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 18:48:14.912: INFO: Waiting for pod pod-689186e1-2d90-11ea-b611-0242ac110005 to disappear
Jan  2 18:48:14.930: INFO: Pod pod-689186e1-2d90-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:48:14.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5xtlb" for this suite.
Jan  2 18:48:21.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:48:21.246: INFO: namespace: e2e-tests-emptydir-5xtlb, resource: bindings, ignored listing per whitelist
Jan  2 18:48:21.328: INFO: namespace e2e-tests-emptydir-5xtlb deletion completed in 6.390919069s

• [SLOW TEST:17.505 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:48:21.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 18:48:21.648: INFO: Waiting up to 5m0s for pod "downward-api-72ffd68a-2d90-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-hr48r" to be "success or failure"
Jan  2 18:48:21.668: INFO: Pod "downward-api-72ffd68a-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.316712ms
Jan  2 18:48:23.688: INFO: Pod "downward-api-72ffd68a-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039910724s
Jan  2 18:48:25.712: INFO: Pod "downward-api-72ffd68a-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063937817s
Jan  2 18:48:27.823: INFO: Pod "downward-api-72ffd68a-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174935077s
Jan  2 18:48:29.859: INFO: Pod "downward-api-72ffd68a-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210937699s
Jan  2 18:48:32.015: INFO: Pod "downward-api-72ffd68a-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.366743079s
Jan  2 18:48:34.047: INFO: Pod "downward-api-72ffd68a-2d90-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.398774818s
STEP: Saw pod success
Jan  2 18:48:34.047: INFO: Pod "downward-api-72ffd68a-2d90-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:48:34.053: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-72ffd68a-2d90-11ea-b611-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 18:48:35.298: INFO: Waiting for pod downward-api-72ffd68a-2d90-11ea-b611-0242ac110005 to disappear
Jan  2 18:48:35.731: INFO: Pod downward-api-72ffd68a-2d90-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:48:35.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hr48r" for this suite.
Jan  2 18:48:42.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:48:42.145: INFO: namespace: e2e-tests-downward-api-hr48r, resource: bindings, ignored listing per whitelist
Jan  2 18:48:42.257: INFO: namespace e2e-tests-downward-api-hr48r deletion completed in 6.290160084s

• [SLOW TEST:20.928 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
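The Downward API test above verifies that a pod's UID can be exposed to a container through an environment variable. A minimal sketch of the kind of manifest this exercises (the pod and variable names here are illustrative, not the exact fixture the framework generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]   # print env vars, then exit (pod reaches Succeeded)
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid  # kubelet injects the pod's UID here
```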
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:48:42.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan  2 18:48:42.426: INFO: Waiting up to 5m0s for pod "var-expansion-7f650260-2d90-11ea-b611-0242ac110005" in namespace "e2e-tests-var-expansion-mjqpx" to be "success or failure"
Jan  2 18:48:42.441: INFO: Pod "var-expansion-7f650260-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.513205ms
Jan  2 18:48:44.470: INFO: Pod "var-expansion-7f650260-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043479057s
Jan  2 18:48:46.489: INFO: Pod "var-expansion-7f650260-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0622215s
Jan  2 18:48:49.172: INFO: Pod "var-expansion-7f650260-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.745684974s
Jan  2 18:48:51.195: INFO: Pod "var-expansion-7f650260-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.768306955s
Jan  2 18:48:53.235: INFO: Pod "var-expansion-7f650260-2d90-11ea-b611-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.808837685s
Jan  2 18:48:55.285: INFO: Pod "var-expansion-7f650260-2d90-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.858866059s
STEP: Saw pod success
Jan  2 18:48:55.286: INFO: Pod "var-expansion-7f650260-2d90-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:48:55.379: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-7f650260-2d90-11ea-b611-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 18:48:55.621: INFO: Waiting for pod var-expansion-7f650260-2d90-11ea-b611-0242ac110005 to disappear
Jan  2 18:48:55.645: INFO: Pod var-expansion-7f650260-2d90-11ea-b611-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:48:55.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-mjqpx" for this suite.
Jan  2 18:49:01.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:49:01.983: INFO: namespace: e2e-tests-var-expansion-mjqpx, resource: bindings, ignored listing per whitelist
Jan  2 18:49:01.999: INFO: namespace e2e-tests-var-expansion-mjqpx deletion completed in 6.342351647s

• [SLOW TEST:19.743 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
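The Variable Expansion test above checks that one env var can be composed from another using `$(VAR)` references. A minimal sketch under assumed names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $COMPOSED"]
    env:
    - name: FOO
      value: "foo-value"
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"   # expands to prefix-foo-value-suffix
```

Note that `$(FOO)` must be declared earlier in the same `env` list for expansion to occur; an unresolvable reference is left literal.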
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:49:02.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:49:12.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-cx8rt" for this suite.
Jan  2 18:49:54.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:49:54.456: INFO: namespace: e2e-tests-kubelet-test-cx8rt, resource: bindings, ignored listing per whitelist
Jan  2 18:49:54.666: INFO: namespace e2e-tests-kubelet-test-cx8rt deletion completed in 42.344675287s

• [SLOW TEST:52.666 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
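The Kubelet test above schedules a busybox container with a read-only root filesystem and asserts that writes to `/` fail. The relevant container-level setting, sketched with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-fs-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # attempting a write to the root filesystem; expected to fail
    command: ["sh", "-c", "echo test > /file"]
    securityContext:
      readOnlyRootFilesystem: true   # mounts the container rootfs read-only
```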
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:49:54.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  2 18:49:54.919: INFO: Waiting up to 5m0s for pod "downward-api-aa9751f5-2d90-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-4xmp9" to be "success or failure"
Jan  2 18:49:54.998: INFO: Pod "downward-api-aa9751f5-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 79.7675ms
Jan  2 18:49:57.011: INFO: Pod "downward-api-aa9751f5-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09242103s
Jan  2 18:49:59.028: INFO: Pod "downward-api-aa9751f5-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10954919s
Jan  2 18:50:01.875: INFO: Pod "downward-api-aa9751f5-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.956316751s
Jan  2 18:50:03.905: INFO: Pod "downward-api-aa9751f5-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.986292986s
Jan  2 18:50:05.937: INFO: Pod "downward-api-aa9751f5-2d90-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.018426338s
STEP: Saw pod success
Jan  2 18:50:05.937: INFO: Pod "downward-api-aa9751f5-2d90-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:50:05.945: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-aa9751f5-2d90-11ea-b611-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  2 18:50:07.177: INFO: Waiting for pod downward-api-aa9751f5-2d90-11ea-b611-0242ac110005 to disappear
Jan  2 18:50:07.262: INFO: Pod downward-api-aa9751f5-2d90-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:50:07.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4xmp9" for this suite.
Jan  2 18:50:13.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:50:14.032: INFO: namespace: e2e-tests-downward-api-4xmp9, resource: bindings, ignored listing per whitelist
Jan  2 18:50:14.099: INFO: namespace e2e-tests-downward-api-4xmp9 deletion completed in 6.825117294s

• [SLOW TEST:19.432 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
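The test above exposes a container's own CPU/memory limits and requests as env vars via `resourceFieldRef`. A minimal sketch (names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-resources-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # containerName defaults to this container
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```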
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:50:14.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 18:50:14.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b630ed73-2d90-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-fdfxc" to be "success or failure"
Jan  2 18:50:14.518: INFO: Pod "downwardapi-volume-b630ed73-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 155.821755ms
Jan  2 18:50:16.553: INFO: Pod "downwardapi-volume-b630ed73-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190059309s
Jan  2 18:50:18.578: INFO: Pod "downwardapi-volume-b630ed73-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215818335s
Jan  2 18:50:20.647: INFO: Pod "downwardapi-volume-b630ed73-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284289379s
Jan  2 18:50:22.683: INFO: Pod "downwardapi-volume-b630ed73-2d90-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.320505684s
Jan  2 18:50:24.702: INFO: Pod "downwardapi-volume-b630ed73-2d90-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.339918621s
STEP: Saw pod success
Jan  2 18:50:24.703: INFO: Pod "downwardapi-volume-b630ed73-2d90-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:50:24.708: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b630ed73-2d90-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 18:50:24.827: INFO: Waiting for pod downwardapi-volume-b630ed73-2d90-11ea-b611-0242ac110005 to disappear
Jan  2 18:50:24.892: INFO: Pod downwardapi-volume-b630ed73-2d90-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:50:24.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fdfxc" for this suite.
Jan  2 18:50:30.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:50:31.024: INFO: namespace: e2e-tests-projected-fdfxc, resource: bindings, ignored listing per whitelist
Jan  2 18:50:31.314: INFO: namespace e2e-tests-projected-fdfxc deletion completed in 6.377160212s

• [SLOW TEST:17.214 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
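The projected downwardAPI test above covers the volume-based variant: when no CPU limit is set on the container, the file reports the node's allocatable CPU instead. A sketch, assuming illustrative names (note that `containerName` is required for `resourceFieldRef` in a volume, unlike in `env`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits set: the file falls back to node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
```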
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:50:31.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xvzcr
Jan  2 18:50:41.728: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xvzcr
STEP: checking the pod's current state and verifying that restartCount is present
Jan  2 18:50:41.735: INFO: Initial restart count of pod liveness-http is 0
Jan  2 18:51:10.262: INFO: Restart count of pod e2e-tests-container-probe-xvzcr/liveness-http is now 1 (28.52688555s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:51:10.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-xvzcr" for this suite.
Jan  2 18:51:16.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:51:16.663: INFO: namespace: e2e-tests-container-probe-xvzcr, resource: bindings, ignored listing per whitelist
Jan  2 18:51:16.699: INFO: namespace e2e-tests-container-probe-xvzcr deletion completed in 6.32987534s

• [SLOW TEST:45.384 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
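The probe test above waits for the container's `restartCount` to increment after its `/healthz` endpoint starts failing. The probe shape it exercises, sketched with assumed port and timing values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http            # matches the pod name in the log; image/args are assumptions
spec:
  containers:
  - name: liveness
    image: busybox               # placeholder; the real test uses a server that fails /healthz after a delay
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1        # restart on the first failed probe
```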
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:51:16.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  2 18:51:16.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mls6p'
Jan  2 18:51:17.407: INFO: stderr: ""
Jan  2 18:51:17.407: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  2 18:51:18.420: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:18.420: INFO: Found 0 / 1
Jan  2 18:51:19.456: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:19.456: INFO: Found 0 / 1
Jan  2 18:51:20.419: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:20.420: INFO: Found 0 / 1
Jan  2 18:51:21.430: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:21.430: INFO: Found 0 / 1
Jan  2 18:51:23.230: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:23.231: INFO: Found 0 / 1
Jan  2 18:51:23.766: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:23.766: INFO: Found 0 / 1
Jan  2 18:51:24.503: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:24.503: INFO: Found 0 / 1
Jan  2 18:51:25.428: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:25.428: INFO: Found 0 / 1
Jan  2 18:51:26.423: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:26.423: INFO: Found 0 / 1
Jan  2 18:51:27.427: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:27.427: INFO: Found 1 / 1
Jan  2 18:51:27.427: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  2 18:51:27.437: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:27.437: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  2 18:51:27.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-jrvq4 --namespace=e2e-tests-kubectl-mls6p -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  2 18:51:27.655: INFO: stderr: ""
Jan  2 18:51:27.655: INFO: stdout: "pod/redis-master-jrvq4 patched\n"
STEP: checking annotations
Jan  2 18:51:27.668: INFO: Selector matched 1 pods for map[app:redis]
Jan  2 18:51:27.668: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:51:27.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mls6p" for this suite.
Jan  2 18:51:51.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:51:51.883: INFO: namespace: e2e-tests-kubectl-mls6p, resource: bindings, ignored listing per whitelist
Jan  2 18:51:52.155: INFO: namespace e2e-tests-kubectl-mls6p deletion completed in 24.480766932s

• [SLOW TEST:35.456 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
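The kubectl patch test above applies a strategic-merge patch adding an annotation to each pod in the RC, exactly as the logged command shows (`-p {"metadata":{"annotations":{"x":"y"}}}`). The same patch body expressed as a YAML fragment, usable with `kubectl patch pod <name> --patch-file patch.yaml`:

```yaml
metadata:
  annotations:
    x: "y"
```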
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:51:52.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:51:52.452: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  2 18:51:52.477: INFO: Number of nodes with available pods: 0
Jan  2 18:51:52.477: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  2 18:51:52.736: INFO: Number of nodes with available pods: 0
Jan  2 18:51:52.736: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:51:53.752: INFO: Number of nodes with available pods: 0
Jan  2 18:51:53.752: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:51:55.117: INFO: Number of nodes with available pods: 0
Jan  2 18:51:55.117: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:51:55.753: INFO: Number of nodes with available pods: 0
Jan  2 18:51:55.753: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:51:56.766: INFO: Number of nodes with available pods: 0
Jan  2 18:51:56.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:51:58.721: INFO: Number of nodes with available pods: 0
Jan  2 18:51:58.721: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:51:59.205: INFO: Number of nodes with available pods: 0
Jan  2 18:51:59.205: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:00.351: INFO: Number of nodes with available pods: 0
Jan  2 18:52:00.352: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:00.756: INFO: Number of nodes with available pods: 0
Jan  2 18:52:00.756: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:01.779: INFO: Number of nodes with available pods: 1
Jan  2 18:52:01.779: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  2 18:52:01.859: INFO: Number of nodes with available pods: 1
Jan  2 18:52:01.859: INFO: Number of running nodes: 0, number of available pods: 1
Jan  2 18:52:02.915: INFO: Number of nodes with available pods: 0
Jan  2 18:52:02.915: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  2 18:52:02.959: INFO: Number of nodes with available pods: 0
Jan  2 18:52:02.959: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:04.023: INFO: Number of nodes with available pods: 0
Jan  2 18:52:04.023: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:05.015: INFO: Number of nodes with available pods: 0
Jan  2 18:52:05.016: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:05.973: INFO: Number of nodes with available pods: 0
Jan  2 18:52:05.973: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:07.268: INFO: Number of nodes with available pods: 0
Jan  2 18:52:07.268: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:07.974: INFO: Number of nodes with available pods: 0
Jan  2 18:52:07.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:08.997: INFO: Number of nodes with available pods: 0
Jan  2 18:52:08.997: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:09.974: INFO: Number of nodes with available pods: 0
Jan  2 18:52:09.974: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:10.970: INFO: Number of nodes with available pods: 0
Jan  2 18:52:10.970: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:11.973: INFO: Number of nodes with available pods: 0
Jan  2 18:52:11.973: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:12.981: INFO: Number of nodes with available pods: 0
Jan  2 18:52:12.982: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:14.779: INFO: Number of nodes with available pods: 0
Jan  2 18:52:14.779: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:15.209: INFO: Number of nodes with available pods: 0
Jan  2 18:52:15.209: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:15.979: INFO: Number of nodes with available pods: 0
Jan  2 18:52:15.979: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:16.978: INFO: Number of nodes with available pods: 0
Jan  2 18:52:16.978: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:18.265: INFO: Number of nodes with available pods: 0
Jan  2 18:52:18.265: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:18.977: INFO: Number of nodes with available pods: 0
Jan  2 18:52:18.977: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:19.978: INFO: Number of nodes with available pods: 0
Jan  2 18:52:19.978: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:20.974: INFO: Number of nodes with available pods: 0
Jan  2 18:52:20.974: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:21.975: INFO: Number of nodes with available pods: 0
Jan  2 18:52:21.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  2 18:52:22.975: INFO: Number of nodes with available pods: 1
Jan  2 18:52:22.975: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2hfm9, will wait for the garbage collector to delete the pods
Jan  2 18:52:23.061: INFO: Deleting DaemonSet.extensions daemon-set took: 18.44675ms
Jan  2 18:52:23.262: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.860844ms
Jan  2 18:52:32.721: INFO: Number of nodes with available pods: 0
Jan  2 18:52:32.721: INFO: Number of running nodes: 0, number of available pods: 0
Jan  2 18:52:32.761: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2hfm9/daemonsets","resourceVersion":"16948133"},"items":null}

Jan  2 18:52:32.766: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2hfm9/pods","resourceVersion":"16948133"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:52:32.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-2hfm9" for this suite.
Jan  2 18:52:38.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:52:39.050: INFO: namespace: e2e-tests-daemonsets-2hfm9, resource: bindings, ignored listing per whitelist
Jan  2 18:52:39.124: INFO: namespace e2e-tests-daemonsets-2hfm9 deletion completed in 6.313220077s

• [SLOW TEST:46.968 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
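The DaemonSet test above creates a daemon constrained by a node selector, then flips the node label (blue to green) and the DaemonSet's selector to verify pods are unscheduled and rescheduled accordingly. A sketch of such a DaemonSet (label key/value and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate          # the test switches to this strategy mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue              # illustrative; daemon pods only land on matching nodes
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
```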
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:52:39.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-0c995044-2d91-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 18:52:39.333: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0c9a4cd2-2d91-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-752qk" to be "success or failure"
Jan  2 18:52:39.344: INFO: Pod "pod-projected-configmaps-0c9a4cd2-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.434154ms
Jan  2 18:52:42.192: INFO: Pod "pod-projected-configmaps-0c9a4cd2-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.858489057s
Jan  2 18:52:44.258: INFO: Pod "pod-projected-configmaps-0c9a4cd2-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.924958717s
Jan  2 18:52:46.453: INFO: Pod "pod-projected-configmaps-0c9a4cd2-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.119804686s
Jan  2 18:52:48.474: INFO: Pod "pod-projected-configmaps-0c9a4cd2-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.141138625s
Jan  2 18:52:50.501: INFO: Pod "pod-projected-configmaps-0c9a4cd2-2d91-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.167853873s
STEP: Saw pod success
Jan  2 18:52:50.501: INFO: Pod "pod-projected-configmaps-0c9a4cd2-2d91-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:52:50.523: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-0c9a4cd2-2d91-11ea-b611-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 18:52:51.549: INFO: Waiting for pod pod-projected-configmaps-0c9a4cd2-2d91-11ea-b611-0242ac110005 to disappear
Jan  2 18:52:51.567: INFO: Pod pod-projected-configmaps-0c9a4cd2-2d91-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:52:51.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-752qk" for this suite.
Jan  2 18:52:57.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:52:57.976: INFO: namespace: e2e-tests-projected-752qk, resource: bindings, ignored listing per whitelist
Jan  2 18:52:58.044: INFO: namespace e2e-tests-projected-752qk deletion completed in 6.273705844s

• [SLOW TEST:18.920 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:52:58.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:52:58.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-xwgkk" for this suite.
Jan  2 18:53:04.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:53:04.516: INFO: namespace: e2e-tests-services-xwgkk, resource: bindings, ignored listing per whitelist
Jan  2 18:53:04.679: INFO: namespace e2e-tests-services-xwgkk deletion completed in 6.39202191s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.634 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:53:04.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan  2 18:53:04.842: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:53:04.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4n8nw" for this suite.
Jan  2 18:53:11.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:53:11.064: INFO: namespace: e2e-tests-kubectl-4n8nw, resource: bindings, ignored listing per whitelist
Jan  2 18:53:11.172: INFO: namespace e2e-tests-kubectl-4n8nw deletion completed in 6.203879562s

• [SLOW TEST:6.493 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:53:11.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan  2 18:53:11.951: INFO: Waiting up to 5m0s for pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f" in namespace "e2e-tests-svcaccounts-kjpn5" to be "success or failure"
Jan  2 18:53:12.026: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f": Phase="Pending", Reason="", readiness=false. Elapsed: 74.871799ms
Jan  2 18:53:14.495: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543514246s
Jan  2 18:53:16.529: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.577928973s
Jan  2 18:53:18.567: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.616228153s
Jan  2 18:53:20.621: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.670340426s
Jan  2 18:53:22.629: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.67761746s
Jan  2 18:53:25.301: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.349765604s
Jan  2 18:53:27.311: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.359732576s
Jan  2 18:53:29.325: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.373890902s
STEP: Saw pod success
Jan  2 18:53:29.325: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f" satisfied condition "success or failure"
Jan  2 18:53:29.329: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f container token-test: 
STEP: delete the pod
Jan  2 18:53:29.471: INFO: Waiting for pod pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f to disappear
Jan  2 18:53:29.483: INFO: Pod pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-7qb5f no longer exists
STEP: Creating a pod to test consume service account root CA
Jan  2 18:53:29.499: INFO: Waiting up to 5m0s for pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v" in namespace "e2e-tests-svcaccounts-kjpn5" to be "success or failure"
Jan  2 18:53:31.568: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06867498s
Jan  2 18:53:33.594: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094356354s
Jan  2 18:53:35.616: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117178481s
Jan  2 18:53:37.834: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.335031942s
Jan  2 18:53:39.884: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.384400726s
Jan  2 18:53:41.900: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v": Phase="Pending", Reason="", readiness=false. Elapsed: 12.400745339s
Jan  2 18:53:45.299: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v": Phase="Pending", Reason="", readiness=false. Elapsed: 15.799401418s
Jan  2 18:53:47.310: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v": Phase="Pending", Reason="", readiness=false. Elapsed: 17.810273627s
Jan  2 18:53:49.332: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.832968493s
STEP: Saw pod success
Jan  2 18:53:49.332: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v" satisfied condition "success or failure"
Jan  2 18:53:49.340: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v container root-ca-test: 
STEP: delete the pod
Jan  2 18:53:50.235: INFO: Waiting for pod pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v to disappear
Jan  2 18:53:50.272: INFO: Pod pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-pgp4v no longer exists
STEP: Creating a pod to test consume service account namespace
Jan  2 18:53:50.390: INFO: Waiting up to 5m0s for pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87" in namespace "e2e-tests-svcaccounts-kjpn5" to be "success or failure"
Jan  2 18:53:50.404: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87": Phase="Pending", Reason="", readiness=false. Elapsed: 13.957303ms
Jan  2 18:53:52.651: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260548861s
Jan  2 18:53:54.673: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282493172s
Jan  2 18:53:56.962: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.571934925s
Jan  2 18:53:58.997: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607178589s
Jan  2 18:54:01.029: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87": Phase="Pending", Reason="", readiness=false. Elapsed: 10.638214798s
Jan  2 18:54:03.041: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87": Phase="Pending", Reason="", readiness=false. Elapsed: 12.650533602s
Jan  2 18:54:05.053: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87": Phase="Pending", Reason="", readiness=false. Elapsed: 14.662972913s
Jan  2 18:54:07.066: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.675377567s
STEP: Saw pod success
Jan  2 18:54:07.066: INFO: Pod "pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87" satisfied condition "success or failure"
Jan  2 18:54:07.069: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87 container namespace-test: 
STEP: delete the pod
Jan  2 18:54:08.773: INFO: Waiting for pod pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87 to disappear
Jan  2 18:54:08.817: INFO: Pod pod-service-account-2009f472-2d91-11ea-b611-0242ac110005-wgs87 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:54:08.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-kjpn5" for this suite.
Jan  2 18:54:16.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:54:17.005: INFO: namespace: e2e-tests-svcaccounts-kjpn5, resource: bindings, ignored listing per whitelist
Jan  2 18:54:17.177: INFO: namespace e2e-tests-svcaccounts-kjpn5 deletion completed in 8.323116758s

• [SLOW TEST:66.005 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:54:17.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  2 18:54:17.504: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  2 18:54:17.535: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  2 18:54:22.583: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  2 18:54:29.462: INFO: Creating deployment "test-rolling-update-deployment"
Jan  2 18:54:29.488: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  2 18:54:29.519: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  2 18:54:31.542: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  2 18:54:31.549: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:54:33.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:54:36.676: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:54:37.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:54:39.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713588069, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  2 18:54:41.568: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  2 18:54:41.595: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-rpflz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rpflz/deployments/test-rolling-update-deployment,UID:4e41c9bc-2d91-11ea-a994-fa163e34d433,ResourceVersion:16948468,Generation:1,CreationTimestamp:2020-01-02 18:54:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-02 18:54:29 +0000 UTC 2020-01-02 18:54:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-02 18:54:39 +0000 UTC 2020-01-02 18:54:29 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  2 18:54:41.629: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-rpflz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rpflz/replicasets/test-rolling-update-deployment-75db98fb4c,UID:4e5619a1-2d91-11ea-a994-fa163e34d433,ResourceVersion:16948459,Generation:1,CreationTimestamp:2020-01-02 18:54:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4e41c9bc-2d91-11ea-a994-fa163e34d433 0xc0016169d7 0xc0016169d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  2 18:54:41.629: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  2 18:54:41.630: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-rpflz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rpflz/replicasets/test-rolling-update-controller,UID:4720d10e-2d91-11ea-a994-fa163e34d433,ResourceVersion:16948467,Generation:2,CreationTimestamp:2020-01-02 18:54:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4e41c9bc-2d91-11ea-a994-fa163e34d433 0xc001616917 0xc001616918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  2 18:54:41.641: INFO: Pod "test-rolling-update-deployment-75db98fb4c-wxb7w" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-wxb7w,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-rpflz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rpflz/pods/test-rolling-update-deployment-75db98fb4c-wxb7w,UID:4e703237-2d91-11ea-a994-fa163e34d433,ResourceVersion:16948458,Generation:0,CreationTimestamp:2020-01-02 18:54:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 4e5619a1-2d91-11ea-a994-fa163e34d433 0xc001617677 0xc001617678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-htncb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-htncb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-htncb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016176e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001617780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:54:29 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:54:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:54:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 18:54:29 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-02 18:54:29 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-02 18:54:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://5dcb1fe879fa182c50c467d2d5e72075b418708149a06649f8e385bcc60f53e0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:54:41.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-rpflz" for this suite.
Jan  2 18:54:49.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:54:49.823: INFO: namespace: e2e-tests-deployment-rpflz, resource: bindings, ignored listing per whitelist
Jan  2 18:54:49.839: INFO: namespace e2e-tests-deployment-rpflz deletion completed in 8.175111679s

• [SLOW TEST:32.661 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:54:49.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  2 18:54:51.049: INFO: Waiting up to 5m0s for pod "pod-5b190f94-2d91-11ea-b611-0242ac110005" in namespace "e2e-tests-emptydir-s7tzb" to be "success or failure"
Jan  2 18:54:51.258: INFO: Pod "pod-5b190f94-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 209.312664ms
Jan  2 18:54:53.672: INFO: Pod "pod-5b190f94-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.6231964s
Jan  2 18:54:55.709: INFO: Pod "pod-5b190f94-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.659615172s
Jan  2 18:54:57.730: INFO: Pod "pod-5b190f94-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.681394077s
Jan  2 18:54:59.788: INFO: Pod "pod-5b190f94-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.739448323s
Jan  2 18:55:01.801: INFO: Pod "pod-5b190f94-2d91-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.752277635s
STEP: Saw pod success
Jan  2 18:55:01.801: INFO: Pod "pod-5b190f94-2d91-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:55:01.809: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5b190f94-2d91-11ea-b611-0242ac110005 container test-container: 
STEP: delete the pod
Jan  2 18:55:02.882: INFO: Waiting for pod pod-5b190f94-2d91-11ea-b611-0242ac110005 to disappear
Jan  2 18:55:02.904: INFO: Pod pod-5b190f94-2d91-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:55:02.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-s7tzb" for this suite.
Jan  2 18:55:08.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:55:09.101: INFO: namespace: e2e-tests-emptydir-s7tzb, resource: bindings, ignored listing per whitelist
Jan  2 18:55:09.114: INFO: namespace e2e-tests-emptydir-s7tzb deletion completed in 6.201590422s

• [SLOW TEST:19.275 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:55:09.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:55:19.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-gzl8r" for this suite.
Jan  2 18:56:13.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:56:13.780: INFO: namespace: e2e-tests-kubelet-test-gzl8r, resource: bindings, ignored listing per whitelist
Jan  2 18:56:13.970: INFO: namespace e2e-tests-kubelet-test-gzl8r deletion completed in 54.370027507s

• [SLOW TEST:64.856 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:56:13.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 18:56:14.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-qvrv7" to be "success or failure"
Jan  2 18:56:14.413: INFO: Pod "downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.165928ms
Jan  2 18:56:16.904: INFO: Pod "downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.522691789s
Jan  2 18:56:18.926: INFO: Pod "downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.544457783s
Jan  2 18:56:20.982: INFO: Pod "downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.601139206s
Jan  2 18:56:22.996: INFO: Pod "downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.614616338s
Jan  2 18:56:25.023: INFO: Pod "downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.642362328s
Jan  2 18:56:27.040: INFO: Pod "downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.658642985s
STEP: Saw pod success
Jan  2 18:56:27.040: INFO: Pod "downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:56:27.044: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 18:56:27.361: INFO: Waiting for pod downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005 to disappear
Jan  2 18:56:27.388: INFO: Pod downwardapi-volume-8cc17c58-2d91-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:56:27.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qvrv7" for this suite.
Jan  2 18:56:33.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:56:33.601: INFO: namespace: e2e-tests-projected-qvrv7, resource: bindings, ignored listing per whitelist
Jan  2 18:56:33.694: INFO: namespace e2e-tests-projected-qvrv7 deletion completed in 6.297568085s

• [SLOW TEST:19.723 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:56:33.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  2 18:56:34.148: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98806590-2d91-11ea-b611-0242ac110005" in namespace "e2e-tests-downward-api-rh9dk" to be "success or failure"
Jan  2 18:56:34.175: INFO: Pod "downwardapi-volume-98806590-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.022121ms
Jan  2 18:56:36.451: INFO: Pod "downwardapi-volume-98806590-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303202057s
Jan  2 18:56:38.490: INFO: Pod "downwardapi-volume-98806590-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34161379s
Jan  2 18:56:40.650: INFO: Pod "downwardapi-volume-98806590-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.501638473s
Jan  2 18:56:43.155: INFO: Pod "downwardapi-volume-98806590-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.007437819s
Jan  2 18:56:45.176: INFO: Pod "downwardapi-volume-98806590-2d91-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.028271467s
STEP: Saw pod success
Jan  2 18:56:45.176: INFO: Pod "downwardapi-volume-98806590-2d91-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:56:45.183: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-98806590-2d91-11ea-b611-0242ac110005 container client-container: 
STEP: delete the pod
Jan  2 18:56:46.073: INFO: Waiting for pod downwardapi-volume-98806590-2d91-11ea-b611-0242ac110005 to disappear
Jan  2 18:56:46.715: INFO: Pod downwardapi-volume-98806590-2d91-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:56:46.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rh9dk" for this suite.
Jan  2 18:56:52.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:56:52.921: INFO: namespace: e2e-tests-downward-api-rh9dk, resource: bindings, ignored listing per whitelist
Jan  2 18:56:52.967: INFO: namespace e2e-tests-downward-api-rh9dk deletion completed in 6.230033509s

• [SLOW TEST:19.273 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:56:52.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-a3ee6cc6-2d91-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 18:56:53.223: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3ef3a35-2d91-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-b9867" to be "success or failure"
Jan  2 18:56:53.232: INFO: Pod "pod-projected-configmaps-a3ef3a35-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.962614ms
Jan  2 18:56:55.276: INFO: Pod "pod-projected-configmaps-a3ef3a35-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052368896s
Jan  2 18:56:57.287: INFO: Pod "pod-projected-configmaps-a3ef3a35-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064017034s
Jan  2 18:57:00.009: INFO: Pod "pod-projected-configmaps-a3ef3a35-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.785668268s
Jan  2 18:57:02.045: INFO: Pod "pod-projected-configmaps-a3ef3a35-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.821401425s
Jan  2 18:57:04.089: INFO: Pod "pod-projected-configmaps-a3ef3a35-2d91-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.865340205s
STEP: Saw pod success
Jan  2 18:57:04.089: INFO: Pod "pod-projected-configmaps-a3ef3a35-2d91-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:57:04.230: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a3ef3a35-2d91-11ea-b611-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  2 18:57:04.427: INFO: Waiting for pod pod-projected-configmaps-a3ef3a35-2d91-11ea-b611-0242ac110005 to disappear
Jan  2 18:57:04.435: INFO: Pod pod-projected-configmaps-a3ef3a35-2d91-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:57:04.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b9867" for this suite.
Jan  2 18:57:10.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:57:10.794: INFO: namespace: e2e-tests-projected-b9867, resource: bindings, ignored listing per whitelist
Jan  2 18:57:10.816: INFO: namespace e2e-tests-projected-b9867 deletion completed in 6.373579547s

• [SLOW TEST:17.849 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:57:10.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  2 18:57:10.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-7ndgx'
Jan  2 18:57:12.816: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  2 18:57:12.817: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan  2 18:57:12.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-7ndgx'
Jan  2 18:57:13.067: INFO: stderr: ""
Jan  2 18:57:13.067: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:57:13.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7ndgx" for this suite.
Jan  2 18:57:37.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:57:37.430: INFO: namespace: e2e-tests-kubectl-7ndgx, resource: bindings, ignored listing per whitelist
Jan  2 18:57:37.499: INFO: namespace e2e-tests-kubectl-7ndgx deletion completed in 24.251604909s

• [SLOW TEST:26.683 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:57:37.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-be6a48ce-2d91-11ea-b611-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  2 18:57:37.704: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005" in namespace "e2e-tests-projected-8ls9h" to be "success or failure"
Jan  2 18:57:37.708: INFO: Pod "pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297834ms
Jan  2 18:57:39.862: INFO: Pod "pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15869392s
Jan  2 18:57:41.902: INFO: Pod "pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198268246s
Jan  2 18:57:44.095: INFO: Pod "pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391740169s
Jan  2 18:57:46.185: INFO: Pod "pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.481433566s
Jan  2 18:57:48.302: INFO: Pod "pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.598683475s
Jan  2 18:57:51.182: INFO: Pod "pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.477974769s
STEP: Saw pod success
Jan  2 18:57:51.182: INFO: Pod "pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:57:51.197: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  2 18:57:51.881: INFO: Waiting for pod pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005 to disappear
Jan  2 18:57:51.900: INFO: Pod pod-projected-secrets-be70f154-2d91-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:57:51.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8ls9h" for this suite.
Jan  2 18:57:58.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:57:58.220: INFO: namespace: e2e-tests-projected-8ls9h, resource: bindings, ignored listing per whitelist
Jan  2 18:57:58.281: INFO: namespace e2e-tests-projected-8ls9h deletion completed in 6.370647892s

• [SLOW TEST:20.781 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  2 18:57:58.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-cb07153e-2d91-11ea-b611-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  2 18:57:58.840: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb08069a-2d91-11ea-b611-0242ac110005" in namespace "e2e-tests-configmap-bvqrf" to be "success or failure"
Jan  2 18:57:58.866: INFO: Pod "pod-configmaps-cb08069a-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.854638ms
Jan  2 18:58:00.888: INFO: Pod "pod-configmaps-cb08069a-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047752949s
Jan  2 18:58:02.934: INFO: Pod "pod-configmaps-cb08069a-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093523441s
Jan  2 18:58:04.947: INFO: Pod "pod-configmaps-cb08069a-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106803884s
Jan  2 18:58:06.965: INFO: Pod "pod-configmaps-cb08069a-2d91-11ea-b611-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124433032s
Jan  2 18:58:09.017: INFO: Pod "pod-configmaps-cb08069a-2d91-11ea-b611-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.177111352s
STEP: Saw pod success
Jan  2 18:58:09.018: INFO: Pod "pod-configmaps-cb08069a-2d91-11ea-b611-0242ac110005" satisfied condition "success or failure"
Jan  2 18:58:09.051: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-cb08069a-2d91-11ea-b611-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  2 18:58:09.434: INFO: Waiting for pod pod-configmaps-cb08069a-2d91-11ea-b611-0242ac110005 to disappear
Jan  2 18:58:09.448: INFO: Pod pod-configmaps-cb08069a-2d91-11ea-b611-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  2 18:58:09.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bvqrf" for this suite.
Jan  2 18:58:15.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  2 18:58:15.718: INFO: namespace: e2e-tests-configmap-bvqrf, resource: bindings, ignored listing per whitelist
Jan  2 18:58:15.718: INFO: namespace e2e-tests-configmap-bvqrf deletion completed in 6.24866038s

• [SLOW TEST:17.436 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
Jan  2 18:58:15.719: INFO: Running AfterSuite actions on all nodes
Jan  2 18:58:15.719: INFO: Running AfterSuite actions on node 1
Jan  2 18:58:15.719: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8709.211 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS