I0321 10:46:43.580677 6 e2e.go:224] Starting e2e run "40c6138a-6b61-11ea-946c-0242ac11000f" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584787603 - Will randomize all specs
Will run 201 of 2164 specs

Mar 21 10:46:43.773: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 10:46:43.776: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 21 10:46:43.791: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 21 10:46:43.824: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 21 10:46:43.824: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 21 10:46:43.824: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 21 10:46:43.831: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 21 10:46:43.831: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 21 10:46:43.831: INFO: e2e test version: v1.13.12
Mar 21 10:46:43.832: INFO: kube-apiserver version: v1.13.12
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:46:43.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
Mar 21 10:46:43.959: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-w2qv
STEP: Creating a pod to test atomic-volume-subpath
Mar 21 10:46:43.984: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-w2qv" in namespace "e2e-tests-subpath-qhspd" to be "success or failure"
Mar 21 10:46:43.995: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.991909ms
Mar 21 10:46:45.999: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014459574s
Mar 21 10:46:48.003: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018548039s
Mar 21 10:46:50.007: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Running", Reason="", readiness=false. Elapsed: 6.022694471s
Mar 21 10:46:52.011: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Running", Reason="", readiness=false. Elapsed: 8.026891286s
Mar 21 10:46:54.015: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Running", Reason="", readiness=false. Elapsed: 10.030745325s
Mar 21 10:46:56.018: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Running", Reason="", readiness=false. Elapsed: 12.034046217s
Mar 21 10:46:58.022: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Running", Reason="", readiness=false. Elapsed: 14.038283523s
Mar 21 10:47:00.027: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Running", Reason="", readiness=false. Elapsed: 16.042736817s
Mar 21 10:47:02.031: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Running", Reason="", readiness=false. Elapsed: 18.046914514s
Mar 21 10:47:04.034: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Running", Reason="", readiness=false. Elapsed: 20.050282659s
Mar 21 10:47:06.038: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Running", Reason="", readiness=false. Elapsed: 22.053539577s
Mar 21 10:47:08.042: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Running", Reason="", readiness=false. Elapsed: 24.05820953s
Mar 21 10:47:10.047: INFO: Pod "pod-subpath-test-configmap-w2qv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.06277481s
STEP: Saw pod success
Mar 21 10:47:10.047: INFO: Pod "pod-subpath-test-configmap-w2qv" satisfied condition "success or failure"
Mar 21 10:47:10.050: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-w2qv container test-container-subpath-configmap-w2qv:
STEP: delete the pod
Mar 21 10:47:10.074: INFO: Waiting for pod pod-subpath-test-configmap-w2qv to disappear
Mar 21 10:47:10.079: INFO: Pod pod-subpath-test-configmap-w2qv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-w2qv
Mar 21 10:47:10.079: INFO: Deleting pod "pod-subpath-test-configmap-w2qv" in namespace "e2e-tests-subpath-qhspd"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:47:10.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-qhspd" for this suite.
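Annotation: the log never shows the manifest this spec submits, but a pod exercising "subPath with mountPath of an existing file" looks roughly like the sketch below. All names, the image, and the command are illustrative assumptions, not values from the log.

```yaml
# Hypothetical reconstruction for illustration only.
# A single ConfigMap key is mounted via subPath over a path
# where a file already exists in the container image.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox
    command: ["cat", "/etc/resolv.conf"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/resolv.conf   # mountPath of an existing file
      subPath: data-key             # one key from the ConfigMap
  volumes:
  - name: configmap-volume
    configMap:
      name: example-configmap       # assumed name
```

The pod runs to completion and the framework treats phase Succeeded as the "success or failure" condition seen in the polling lines above.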
Mar 21 10:47:16.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:47:16.168: INFO: namespace: e2e-tests-subpath-qhspd, resource: bindings, ignored listing per whitelist
Mar 21 10:47:16.200: INFO: namespace e2e-tests-subpath-qhspd deletion completed in 6.114847886s
• [SLOW TEST:32.368 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:47:16.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 21 10:47:16.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5492158f-6b61-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-cs4lx" to be "success or failure"
Mar 21 10:47:16.354: INFO: Pod "downwardapi-volume-5492158f-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.112418ms
Mar 21 10:47:18.358: INFO: Pod "downwardapi-volume-5492158f-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017399414s
Mar 21 10:47:20.363: INFO: Pod "downwardapi-volume-5492158f-6b61-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021932499s
STEP: Saw pod success
Mar 21 10:47:20.363: INFO: Pod "downwardapi-volume-5492158f-6b61-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 10:47:20.366: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-5492158f-6b61-11ea-946c-0242ac11000f container client-container:
STEP: delete the pod
Mar 21 10:47:20.386: INFO: Waiting for pod downwardapi-volume-5492158f-6b61-11ea-946c-0242ac11000f to disappear
Mar 21 10:47:20.390: INFO: Pod downwardapi-volume-5492158f-6b61-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:47:20.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cs4lx" for this suite.
Mar 21 10:47:26.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:47:26.457: INFO: namespace: e2e-tests-projected-cs4lx, resource: bindings, ignored listing per whitelist
Mar 21 10:47:26.500: INFO: namespace e2e-tests-projected-cs4lx deletion completed in 6.107078021s
• [SLOW TEST:10.300 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:47:26.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5aba95d6-6b61-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 21 10:47:26.649: INFO: Waiting up to 5m0s for pod "pod-secrets-5abb1a6c-6b61-11ea-946c-0242ac11000f" in namespace "e2e-tests-secrets-8bbv7" to be "success or failure"
Mar 21 10:47:26.697: INFO: Pod "pod-secrets-5abb1a6c-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 48.058961ms
Mar 21 10:47:28.701: INFO: Pod "pod-secrets-5abb1a6c-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052096708s
Mar 21 10:47:30.705: INFO: Pod "pod-secrets-5abb1a6c-6b61-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056260437s
STEP: Saw pod success
Mar 21 10:47:30.705: INFO: Pod "pod-secrets-5abb1a6c-6b61-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 10:47:30.708: INFO: Trying to get logs from node hunter-worker pod pod-secrets-5abb1a6c-6b61-11ea-946c-0242ac11000f container secret-volume-test:
STEP: delete the pod
Mar 21 10:47:30.879: INFO: Waiting for pod pod-secrets-5abb1a6c-6b61-11ea-946c-0242ac11000f to disappear
Mar 21 10:47:30.922: INFO: Pod pod-secrets-5abb1a6c-6b61-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:47:30.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8bbv7" for this suite.
Mar 21 10:47:36.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:47:37.028: INFO: namespace: e2e-tests-secrets-8bbv7, resource: bindings, ignored listing per whitelist
Mar 21 10:47:37.032: INFO: namespace e2e-tests-secrets-8bbv7 deletion completed in 6.105209403s
• [SLOW TEST:10.532 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:47:37.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0321 10:47:38.190056 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 21 10:47:38.190: INFO: For apiserver_request_count:
	For apiserver_request_latencies_summary:
	For etcd_helper_cache_entry_count:
	For etcd_helper_cache_hit_count:
	For etcd_helper_cache_miss_count:
	For etcd_request_cache_add_latencies_summary:
	For etcd_request_cache_get_latencies_summary:
	For etcd_request_latencies_summary:
	For garbage_collector_attempt_to_delete_queue_latency:
	For garbage_collector_attempt_to_delete_work_duration:
	For garbage_collector_attempt_to_orphan_queue_latency:
	For garbage_collector_attempt_to_orphan_work_duration:
	For garbage_collector_dirty_processing_latency_microseconds:
	For garbage_collector_event_processing_latency_microseconds:
	For garbage_collector_graph_changes_queue_latency:
	For garbage_collector_graph_changes_work_duration:
	For garbage_collector_orphan_processing_latency_microseconds:
	For namespace_queue_latency:
	For namespace_queue_latency_sum:
	For namespace_queue_latency_count:
	For namespace_retries:
	For namespace_work_duration:
	For namespace_work_duration_sum:
	For namespace_work_duration_count:
	For function_duration_seconds:
	For errors_total:
	For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:47:38.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-24b75" for this suite.
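Annotation: "not orphaning" refers to the deletion propagation policy sent with the DELETE call on the Deployment. A sketch of the request body involved; exact field placement follows the `meta/v1` `DeleteOptions` type, and the policy value shown is an assumption about which non-orphaning mode the test uses:

```yaml
# Body of the DELETE request against the Deployment.
# "Background" (or "Foreground") asks the garbage collector to delete
# dependents such as the ReplicaSet and its pods; "Orphan" would
# leave them behind, which this spec asserts does NOT happen.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background
```

The "expected 0 rs, got 1 rs" lines above are the test polling until the collector catches up; they are progress output, not failures.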
Mar 21 10:47:44.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:47:44.292: INFO: namespace: e2e-tests-gc-24b75, resource: bindings, ignored listing per whitelist
Mar 21 10:47:44.314: INFO: namespace e2e-tests-gc-24b75 deletion completed in 6.120478367s
• [SLOW TEST:7.281 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:47:44.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-6556e305-6b61-11ea-946c-0242ac11000f
STEP: Creating secret with name s-test-opt-upd-6556e360-6b61-11ea-946c-0242ac11000f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6556e305-6b61-11ea-946c-0242ac11000f
STEP: Updating secret s-test-opt-upd-6556e360-6b61-11ea-946c-0242ac11000f
STEP: Creating secret with name s-test-opt-create-6556e37f-6b61-11ea-946c-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:49:21.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-z9b26" for this suite.
Mar 21 10:49:43.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:49:43.189: INFO: namespace: e2e-tests-secrets-z9b26, resource: bindings, ignored listing per whitelist
Mar 21 10:49:43.241: INFO: namespace e2e-tests-secrets-z9b26 deletion completed in 22.092655994s
• [SLOW TEST:118.927 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:49:43.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 21 10:49:43.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac3183cf-6b61-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-mclkw" to be "success or failure"
Mar 21 10:49:43.332: INFO: Pod "downwardapi-volume-ac3183cf-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012154ms
Mar 21 10:49:45.337: INFO: Pod "downwardapi-volume-ac3183cf-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00841998s
Mar 21 10:49:47.346: INFO: Pod "downwardapi-volume-ac3183cf-6b61-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017639006s
STEP: Saw pod success
Mar 21 10:49:47.346: INFO: Pod "downwardapi-volume-ac3183cf-6b61-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 10:49:47.348: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ac3183cf-6b61-11ea-946c-0242ac11000f container client-container:
STEP: delete the pod
Mar 21 10:49:47.378: INFO: Waiting for pod downwardapi-volume-ac3183cf-6b61-11ea-946c-0242ac11000f to disappear
Mar 21 10:49:47.387: INFO: Pod downwardapi-volume-ac3183cf-6b61-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:49:47.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mclkw" for this suite.
Mar 21 10:49:53.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:49:53.478: INFO: namespace: e2e-tests-downward-api-mclkw, resource: bindings, ignored listing per whitelist
Mar 21 10:49:53.485: INFO: namespace e2e-tests-downward-api-mclkw deletion completed in 6.095047133s
• [SLOW TEST:10.244 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:49:53.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-b253c673-6b61-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 21 10:49:53.617: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b2542008-6b61-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-2lp77" to be "success or failure"
Mar 21 10:49:53.621: INFO: Pod "pod-projected-secrets-b2542008-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010757ms
Mar 21 10:49:55.625: INFO: Pod "pod-projected-secrets-b2542008-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007864914s
Mar 21 10:49:57.629: INFO: Pod "pod-projected-secrets-b2542008-6b61-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011791717s
STEP: Saw pod success
Mar 21 10:49:57.629: INFO: Pod "pod-projected-secrets-b2542008-6b61-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 10:49:57.633: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-b2542008-6b61-11ea-946c-0242ac11000f container secret-volume-test:
STEP: delete the pod
Mar 21 10:49:57.653: INFO: Waiting for pod pod-projected-secrets-b2542008-6b61-11ea-946c-0242ac11000f to disappear
Mar 21 10:49:57.657: INFO: Pod pod-projected-secrets-b2542008-6b61-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:49:57.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2lp77" for this suite.
Mar 21 10:50:03.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:50:03.759: INFO: namespace: e2e-tests-projected-2lp77, resource: bindings, ignored listing per whitelist
Mar 21 10:50:03.779: INFO: namespace e2e-tests-projected-2lp77 deletion completed in 6.118117139s
• [SLOW TEST:10.294 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:50:03.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Mar 21 10:50:03.905: INFO: Waiting up to 5m0s for pod "client-containers-b8763f41-6b61-11ea-946c-0242ac11000f" in namespace "e2e-tests-containers-jcd7k" to be "success or failure"
Mar 21 10:50:03.919: INFO: Pod "client-containers-b8763f41-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.136578ms
Mar 21 10:50:05.924: INFO: Pod "client-containers-b8763f41-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019419547s
Mar 21 10:50:07.929: INFO: Pod "client-containers-b8763f41-6b61-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023969263s
STEP: Saw pod success
Mar 21 10:50:07.929: INFO: Pod "client-containers-b8763f41-6b61-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 10:50:07.932: INFO: Trying to get logs from node hunter-worker pod client-containers-b8763f41-6b61-11ea-946c-0242ac11000f container test-container:
STEP: delete the pod
Mar 21 10:50:07.950: INFO: Waiting for pod client-containers-b8763f41-6b61-11ea-946c-0242ac11000f to disappear
Mar 21 10:50:07.955: INFO: Pod client-containers-b8763f41-6b61-11ea-946c-0242ac11000f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:50:07.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-jcd7k" for this suite.
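Annotation: "image defaults if command and args are blank" means the container spec omits both `command` and `args`, so the runtime falls back to the image's own ENTRYPOINT/CMD. A minimal sketch (pod name and image are assumptions; the actual test image is not shown in the log):

```yaml
# Hypothetical reconstruction for illustration only.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # command and args deliberately omitted: the container runs the
    # image's built-in ENTRYPOINT/CMD, which is what the test verifies
```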
Mar 21 10:50:13.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:50:13.999: INFO: namespace: e2e-tests-containers-jcd7k, resource: bindings, ignored listing per whitelist
Mar 21 10:50:14.054: INFO: namespace e2e-tests-containers-jcd7k deletion completed in 6.095582151s
• [SLOW TEST:10.275 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:50:14.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-be947736-6b61-11ea-946c-0242ac11000f
STEP: Creating secret with name secret-projected-all-test-volume-be9476f7-6b61-11ea-946c-0242ac11000f
STEP: Creating a pod to test Check all projections for projected volume plugin
Mar 21 10:50:14.186: INFO: Waiting up to 5m0s for pod "projected-volume-be947662-6b61-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-74pn5" to be "success or failure"
Mar 21 10:50:14.196: INFO: Pod "projected-volume-be947662-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.247614ms
Mar 21 10:50:16.201: INFO: Pod "projected-volume-be947662-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014784676s
Mar 21 10:50:18.205: INFO: Pod "projected-volume-be947662-6b61-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019265133s
STEP: Saw pod success
Mar 21 10:50:18.205: INFO: Pod "projected-volume-be947662-6b61-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 10:50:18.209: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-be947662-6b61-11ea-946c-0242ac11000f container projected-all-volume-test:
STEP: delete the pod
Mar 21 10:50:18.228: INFO: Waiting for pod projected-volume-be947662-6b61-11ea-946c-0242ac11000f to disappear
Mar 21 10:50:18.233: INFO: Pod projected-volume-be947662-6b61-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:50:18.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-74pn5" for this suite.
Mar 21 10:50:24.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:50:24.382: INFO: namespace: e2e-tests-projected-74pn5, resource: bindings, ignored listing per whitelist
Mar 21 10:50:24.403: INFO: namespace e2e-tests-projected-74pn5 deletion completed in 6.167238561s
• [SLOW TEST:10.348 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:50:24.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 21 10:50:24.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:50:28.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zjzgl" for this suite.
Mar 21 10:51:06.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:51:06.711: INFO: namespace: e2e-tests-pods-zjzgl, resource: bindings, ignored listing per whitelist
Mar 21 10:51:06.731: INFO: namespace e2e-tests-pods-zjzgl deletion completed in 38.098701726s
• [SLOW TEST:42.328 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:51:06.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-de03bce3-6b61-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 21 10:51:06.931: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de054b06-6b61-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-p5nwm" to be "success or failure"
Mar 21 10:51:06.941: INFO: Pod "pod-projected-configmaps-de054b06-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.823645ms
Mar 21 10:51:08.945: INFO: Pod "pod-projected-configmaps-de054b06-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014021611s
Mar 21 10:51:10.949: INFO: Pod "pod-projected-configmaps-de054b06-6b61-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017809257s
STEP: Saw pod success
Mar 21 10:51:10.949: INFO: Pod "pod-projected-configmaps-de054b06-6b61-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 10:51:10.952: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-de054b06-6b61-11ea-946c-0242ac11000f container projected-configmap-volume-test:
STEP: delete the pod
Mar 21 10:51:10.972: INFO: Waiting for pod pod-projected-configmaps-de054b06-6b61-11ea-946c-0242ac11000f to disappear
Mar 21 10:51:10.977: INFO: Pod pod-projected-configmaps-de054b06-6b61-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:51:10.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p5nwm" for this suite.
Mar 21 10:51:17.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:51:17.046: INFO: namespace: e2e-tests-projected-p5nwm, resource: bindings, ignored listing per whitelist
Mar 21 10:51:17.117: INFO: namespace e2e-tests-projected-p5nwm deletion completed in 6.13414816s
• [SLOW TEST:10.386 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:51:17.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 21 10:51:17.222: INFO: Waiting up to 5m0s for pod "pod-e4282b4b-6b61-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-lr2th" to be "success or failure"
Mar 21 10:51:17.226: INFO: Pod "pod-e4282b4b-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448591ms
Mar 21 10:51:19.252: INFO: Pod "pod-e4282b4b-6b61-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030087794s
Mar 21 10:51:21.256: INFO: Pod "pod-e4282b4b-6b61-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033934252s
STEP: Saw pod success
Mar 21 10:51:21.256: INFO: Pod "pod-e4282b4b-6b61-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 10:51:21.259: INFO: Trying to get logs from node hunter-worker pod pod-e4282b4b-6b61-11ea-946c-0242ac11000f container test-container:
STEP: delete the pod
Mar 21 10:51:21.276: INFO: Waiting for pod pod-e4282b4b-6b61-11ea-946c-0242ac11000f to disappear
Mar 21 10:51:21.286: INFO: Pod pod-e4282b4b-6b61-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:51:21.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lr2th" for this suite.
Mar 21 10:51:27.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:51:27.351: INFO: namespace: e2e-tests-emptydir-lr2th, resource: bindings, ignored listing per whitelist
Mar 21 10:51:27.368: INFO: namespace e2e-tests-emptydir-lr2th deletion completed in 6.078832241s
• [SLOW TEST:10.251 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:51:27.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-jfnvk
Mar 21 10:51:31.490: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-jfnvk
STEP: checking the pod's current state and verifying that restartCount is present
Mar 21 10:51:31.493: INFO: Initial restart count of pod liveness-exec is 0
Mar 21 10:52:21.628: INFO: Restart count of pod e2e-tests-container-probe-jfnvk/liveness-exec is now 1 (50.134813268s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:52:21.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jfnvk" for this suite.
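The `liveness-exec` test above watches restartCount go from 0 to 1: the container removes `/tmp/health` partway through its run, the `cat /tmp/health` exec probe starts failing, and the kubelet restarts the container once the probe has failed enough consecutive times. A rough Python model of that restart rule; it assumes the API default `failureThreshold` of 3 (the actual probe spec is not shown in the log), and the helper name is mine, not Kubernetes code:

```python
def count_restarts(probe_results, failure_threshold=3):
    """Given a sequence of liveness probe outcomes (True = healthy),
    count container restarts: a restart is triggered after
    failure_threshold consecutive probe failures, after which the
    consecutive-failure counter resets for the fresh container."""
    restarts = 0
    consecutive_failures = 0
    for healthy in probe_results:
        if healthy:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == failure_threshold:
                restarts += 1
                consecutive_failures = 0
    return restarts
```

The ~50s elapsed time in the log is consistent with this shape: some healthy period, then several consecutive probe failures spaced by the probe period before the kubelet records restart number 1.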
Mar 21 10:52:27.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:52:27.748: INFO: namespace: e2e-tests-container-probe-jfnvk, resource: bindings, ignored listing per whitelist
Mar 21 10:52:27.776: INFO: namespace e2e-tests-container-probe-jfnvk deletion completed in 6.101998261s
• [SLOW TEST:60.406 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:52:27.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 21 10:52:27.933: INFO: Waiting up to 5m0s for pod "downward-api-0e4f9157-6b62-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-6v7dz" to be "success or failure"
Mar 21 10:52:27.948: INFO: Pod "downward-api-0e4f9157-6b62-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.940697ms
Mar 21 10:52:29.952: INFO: Pod "downward-api-0e4f9157-6b62-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019107025s
Mar 21 10:52:31.957: INFO: Pod "downward-api-0e4f9157-6b62-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023406726s
STEP: Saw pod success
Mar 21 10:52:31.957: INFO: Pod "downward-api-0e4f9157-6b62-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 10:52:31.960: INFO: Trying to get logs from node hunter-worker pod downward-api-0e4f9157-6b62-11ea-946c-0242ac11000f container dapi-container:
STEP: delete the pod
Mar 21 10:52:31.980: INFO: Waiting for pod downward-api-0e4f9157-6b62-11ea-946c-0242ac11000f to disappear
Mar 21 10:52:31.996: INFO: Pod downward-api-0e4f9157-6b62-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:52:31.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6v7dz" for this suite.
Mar 21 10:52:38.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:52:38.082: INFO: namespace: e2e-tests-downward-api-6v7dz, resource: bindings, ignored listing per whitelist
Mar 21 10:52:38.148: INFO: namespace e2e-tests-downward-api-6v7dz deletion completed in 6.147739078s
• [SLOW TEST:10.372 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:52:38.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Mar 21 10:52:38.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r5bzf'
Mar 21 10:52:40.395: INFO: stderr: ""
Mar 21 10:52:40.395: INFO: stdout: "pod/pause created\n"
Mar 21 10:52:40.395: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Mar 21 10:52:40.395: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-r5bzf" to be "running and ready"
Mar 21 10:52:40.425: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 30.305123ms
Mar 21 10:52:42.429: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034369483s
Mar 21 10:52:44.432: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.037316558s
Mar 21 10:52:44.432: INFO: Pod "pause" satisfied condition "running and ready"
Mar 21 10:52:44.432: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Mar 21 10:52:44.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-r5bzf'
Mar 21 10:52:44.542: INFO: stderr: ""
Mar 21 10:52:44.542: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Mar 21 10:52:44.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-r5bzf'
Mar 21 10:52:44.641: INFO: stderr: ""
Mar 21 10:52:44.641: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Mar 21 10:52:44.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-r5bzf'
Mar 21 10:52:44.738: INFO: stderr: ""
Mar 21 10:52:44.738: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Mar 21 10:52:44.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-r5bzf'
Mar 21 10:52:44.831: INFO: stderr: ""
Mar 21 10:52:44.831: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Mar 21 10:52:44.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r5bzf'
Mar 21 10:52:44.959: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 21 10:52:44.959: INFO: stdout: "pod \"pause\" force deleted\n"
Mar 21 10:52:44.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-r5bzf'
Mar 21 10:52:45.060: INFO: stderr: "No resources found.\n"
Mar 21 10:52:45.060: INFO: stdout: ""
Mar 21 10:52:45.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-r5bzf -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 21 10:52:45.174: INFO: stderr: ""
Mar 21 10:52:45.174: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:52:45.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r5bzf" for this suite.
Mar 21 10:52:51.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:52:51.268: INFO: namespace: e2e-tests-kubectl-r5bzf, resource: bindings, ignored listing per whitelist
Mar 21 10:52:51.332: INFO: namespace e2e-tests-kubectl-r5bzf deletion completed in 6.153641825s
• [SLOW TEST:13.184 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:52:51.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Mar 21 10:52:51.473: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qkfn9,SelfLink:/api/v1/namespaces/e2e-tests-watch-qkfn9/configmaps/e2e-watch-test-watch-closed,UID:1c535128-6b62-11ea-99e8-0242ac110002,ResourceVersion:1006919,Generation:0,CreationTimestamp:2020-03-21 10:52:51 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 21 10:52:51.473: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qkfn9,SelfLink:/api/v1/namespaces/e2e-tests-watch-qkfn9/configmaps/e2e-watch-test-watch-closed,UID:1c535128-6b62-11ea-99e8-0242ac110002,ResourceVersion:1006920,Generation:0,CreationTimestamp:2020-03-21 10:52:51 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Mar 21 10:52:51.483: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qkfn9,SelfLink:/api/v1/namespaces/e2e-tests-watch-qkfn9/configmaps/e2e-watch-test-watch-closed,UID:1c535128-6b62-11ea-99e8-0242ac110002,ResourceVersion:1006921,Generation:0,CreationTimestamp:2020-03-21 10:52:51 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 21 10:52:51.484: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qkfn9,SelfLink:/api/v1/namespaces/e2e-tests-watch-qkfn9/configmaps/e2e-watch-test-watch-closed,UID:1c535128-6b62-11ea-99e8-0242ac110002,ResourceVersion:1006922,Generation:0,CreationTimestamp:2020-03-21 10:52:51 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:52:51.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-qkfn9" for this suite.
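What the Watchers test above verifies is the resourceVersion resume guarantee: a new watch started from the last observed ResourceVersion (1006920) replays every later event, which is why the restarted watch sees the second MODIFIED (1006921, `mutation: 2`) and the DELETED (1006922). A toy Python model of that guarantee; the class and method names are illustrative, not client-go's API:

```python
class WatchLog:
    """Toy stand-in for the API server's per-resource event history."""

    def __init__(self):
        self.resource_version = 0
        self.events = []

    def record(self, event_type, name):
        # Every write bumps the resourceVersion and emits one event.
        self.resource_version += 1
        self.events.append({"type": event_type, "name": name,
                            "resourceVersion": self.resource_version})
        return self.resource_version

    def watch_from(self, resource_version):
        # A watch started at RV N replays every event with RV > N.
        return [e for e in self.events
                if e["resourceVersion"] > resource_version]
```

In the real API the history window is bounded, so a too-old resourceVersion fails with "410 Gone" rather than replaying forever; this sketch ignores that detail.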
Mar 21 10:52:57.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:52:57.530: INFO: namespace: e2e-tests-watch-qkfn9, resource: bindings, ignored listing per whitelist
Mar 21 10:52:57.574: INFO: namespace e2e-tests-watch-qkfn9 deletion completed in 6.085651196s
• [SLOW TEST:6.242 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:52:57.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 21 10:52:57.664: INFO: Waiting up to 5m0s for pod "pod-2006f37a-6b62-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-7hl27" to be "success or failure"
Mar 21 10:52:57.668: INFO: Pod "pod-2006f37a-6b62-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.957679ms
Mar 21 10:52:59.672: INFO: Pod "pod-2006f37a-6b62-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008007426s
Mar 21 10:53:01.676: INFO: Pod "pod-2006f37a-6b62-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012285831s
STEP: Saw pod success
Mar 21 10:53:01.676: INFO: Pod "pod-2006f37a-6b62-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 10:53:01.679: INFO: Trying to get logs from node hunter-worker pod pod-2006f37a-6b62-11ea-946c-0242ac11000f container test-container:
STEP: delete the pod
Mar 21 10:53:01.716: INFO: Waiting for pod pod-2006f37a-6b62-11ea-946c-0242ac11000f to disappear
Mar 21 10:53:01.728: INFO: Pod pod-2006f37a-6b62-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:53:01.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7hl27" for this suite.
Mar 21 10:53:07.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:53:07.788: INFO: namespace: e2e-tests-emptydir-7hl27, resource: bindings, ignored listing per whitelist
Mar 21 10:53:07.819: INFO: namespace e2e-tests-emptydir-7hl27 deletion completed in 6.087370872s
• [SLOW TEST:10.244 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:53:07.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-8mpgl
Mar 21 10:53:11.975: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-8mpgl
STEP: checking the pod's current state and verifying that restartCount is present
Mar 21 10:53:11.978: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:57:12.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-8mpgl" for this suite.
Mar 21 10:57:18.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:57:18.868: INFO: namespace: e2e-tests-container-probe-8mpgl, resource: bindings, ignored listing per whitelist
Mar 21 10:57:18.922: INFO: namespace e2e-tests-container-probe-8mpgl deletion completed in 6.094941292s
• [SLOW TEST:251.103 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:57:18.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Mar 21 10:57:19.040: INFO: Waiting up to 5m0s for pod "client-containers-bbd228da-6b62-11ea-946c-0242ac11000f" in namespace "e2e-tests-containers-blzms" to be "success or failure"
Mar 21 10:57:19.045: INFO: Pod "client-containers-bbd228da-6b62-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.609638ms
Mar 21 10:57:21.049: INFO: Pod "client-containers-bbd228da-6b62-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008543593s
Mar 21 10:57:23.053: INFO: Pod "client-containers-bbd228da-6b62-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012873177s
STEP: Saw pod success
Mar 21 10:57:23.053: INFO: Pod "client-containers-bbd228da-6b62-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 10:57:23.056: INFO: Trying to get logs from node hunter-worker pod client-containers-bbd228da-6b62-11ea-946c-0242ac11000f container test-container:
STEP: delete the pod
Mar 21 10:57:23.077: INFO: Waiting for pod client-containers-bbd228da-6b62-11ea-946c-0242ac11000f to disappear
Mar 21 10:57:23.081: INFO: Pod client-containers-bbd228da-6b62-11ea-946c-0242ac11000f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:57:23.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-blzms" for this suite.
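The "override all" Docker Containers test above sets both `command` and `args` in the pod spec, replacing the image's built-in ENTRYPOINT and CMD. The documented resolution rule can be sketched in Python (the helper name is mine; the four-case logic is Kubernetes' documented behavior):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve the argv a container runs with, given the image's
    ENTRYPOINT/CMD and an optional command/args override from the pod spec:
      - neither set:       ENTRYPOINT + CMD
      - only command set:  command (CMD is ignored)
      - only args set:     ENTRYPOINT + args
      - both set:          command + args
    """
    if command is None and args is None:
        return image_entrypoint + image_cmd
    if command is not None and args is None:
        return command
    if command is None:
        return image_entrypoint + args
    return command + args
```

The test's name ("override all") corresponds to the last case: with both fields set, nothing from the image's ENTRYPOINT or CMD survives.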
Mar 21 10:57:29.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:57:29.121: INFO: namespace: e2e-tests-containers-blzms, resource: bindings, ignored listing per whitelist
Mar 21 10:57:29.178: INFO: namespace e2e-tests-containers-blzms deletion completed in 6.093432643s
• [SLOW TEST:10.256 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:57:29.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 21 10:57:33.823: INFO: Successfully updated pod "annotationupdatec1ee8a15-6b62-11ea-946c-0242ac11000f"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 10:57:35.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rh2zg" for this suite.
Mar 21 10:57:57.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 10:57:57.907: INFO: namespace: e2e-tests-downward-api-rh2zg, resource: bindings, ignored listing per whitelist
Mar 21 10:57:57.969: INFO: namespace e2e-tests-downward-api-rh2zg deletion completed in 22.096425193s
• [SLOW TEST:28.791 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 10:57:57.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-d3159899-6b62-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 21 10:57:58.085: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d3173972-6b62-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-74wcj" to be "success or failure"
Mar
21 10:57:58.144: INFO: Pod "pod-projected-configmaps-d3173972-6b62-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 59.227456ms Mar 21 10:58:00.148: INFO: Pod "pod-projected-configmaps-d3173972-6b62-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062893775s Mar 21 10:58:02.154: INFO: Pod "pod-projected-configmaps-d3173972-6b62-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068808407s STEP: Saw pod success Mar 21 10:58:02.154: INFO: Pod "pod-projected-configmaps-d3173972-6b62-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 10:58:02.157: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-d3173972-6b62-11ea-946c-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 21 10:58:02.203: INFO: Waiting for pod pod-projected-configmaps-d3173972-6b62-11ea-946c-0242ac11000f to disappear Mar 21 10:58:02.214: INFO: Pod pod-projected-configmaps-d3173972-6b62-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 10:58:02.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-74wcj" for this suite. 
Mar 21 10:58:08.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 10:58:08.311: INFO: namespace: e2e-tests-projected-74wcj, resource: bindings, ignored listing per whitelist Mar 21 10:58:08.325: INFO: namespace e2e-tests-projected-74wcj deletion completed in 6.105861604s • [SLOW TEST:10.355 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 10:58:08.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-r8dpv [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new 
StatefulSet Mar 21 10:58:08.454: INFO: Found 0 stateful pods, waiting for 3 Mar 21 10:58:18.459: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 10:58:18.459: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 10:58:18.459: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 21 10:58:18.491: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 21 10:58:28.568: INFO: Updating stateful set ss2 Mar 21 10:58:28.578: INFO: Waiting for Pod e2e-tests-statefulset-r8dpv/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 21 10:58:38.586: INFO: Waiting for Pod e2e-tests-statefulset-r8dpv/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 21 10:58:48.762: INFO: Found 2 stateful pods, waiting for 3 Mar 21 10:58:58.767: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 10:58:58.767: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 10:58:58.767: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 21 10:58:58.793: INFO: Updating stateful set ss2 Mar 21 10:58:58.813: INFO: Waiting for Pod e2e-tests-statefulset-r8dpv/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 21 10:59:08.821: INFO: Waiting for Pod e2e-tests-statefulset-r8dpv/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 21 10:59:18.839: INFO: Updating stateful set ss2 Mar 21 10:59:18.872: INFO: 
Waiting for StatefulSet e2e-tests-statefulset-r8dpv/ss2 to complete update Mar 21 10:59:18.872: INFO: Waiting for Pod e2e-tests-statefulset-r8dpv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 21 10:59:28.880: INFO: Waiting for StatefulSet e2e-tests-statefulset-r8dpv/ss2 to complete update Mar 21 10:59:28.880: INFO: Waiting for Pod e2e-tests-statefulset-r8dpv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 21 10:59:38.880: INFO: Deleting all statefulset in ns e2e-tests-statefulset-r8dpv Mar 21 10:59:38.884: INFO: Scaling statefulset ss2 to 0 Mar 21 10:59:58.924: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 10:59:58.927: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 10:59:58.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-r8dpv" for this suite. 
Mar 21 11:00:04.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:00:05.076: INFO: namespace: e2e-tests-statefulset-r8dpv, resource: bindings, ignored listing per whitelist Mar 21 11:00:05.076: INFO: namespace e2e-tests-statefulset-r8dpv deletion completed in 6.134784818s • [SLOW TEST:116.751 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:00:05.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 21 11:00:05.194: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:00:12.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-br9jh" for this suite. Mar 21 11:00:18.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:00:18.942: INFO: namespace: e2e-tests-init-container-br9jh, resource: bindings, ignored listing per whitelist Mar 21 11:00:18.971: INFO: namespace e2e-tests-init-container-br9jh deletion completed in 6.100990041s • [SLOW TEST:13.895 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:00:18.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Mar 21 11:00:19.070: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-7mw8d run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 21 11:00:21.997: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0321 11:00:21.914159 224 log.go:172] (0xc0006e02c0) (0xc000882140) Create stream\nI0321 11:00:21.914221 224 log.go:172] (0xc0006e02c0) (0xc000882140) Stream added, broadcasting: 1\nI0321 11:00:21.916874 224 log.go:172] (0xc0006e02c0) Reply frame received for 1\nI0321 11:00:21.916896 224 log.go:172] (0xc0006e02c0) (0xc0000f1900) Create stream\nI0321 11:00:21.916903 224 log.go:172] (0xc0006e02c0) (0xc0000f1900) Stream added, broadcasting: 3\nI0321 11:00:21.918054 224 log.go:172] (0xc0006e02c0) Reply frame received for 3\nI0321 11:00:21.918109 224 log.go:172] (0xc0006e02c0) (0xc0008821e0) Create stream\nI0321 11:00:21.918127 224 log.go:172] (0xc0006e02c0) (0xc0008821e0) Stream added, broadcasting: 5\nI0321 11:00:21.918992 224 log.go:172] (0xc0006e02c0) Reply frame received for 5\nI0321 11:00:21.919009 224 log.go:172] (0xc0006e02c0) (0xc000882280) Create stream\nI0321 11:00:21.919014 224 log.go:172] (0xc0006e02c0) (0xc000882280) Stream added, broadcasting: 7\nI0321 11:00:21.920221 224 log.go:172] (0xc0006e02c0) Reply frame received for 7\nI0321 11:00:21.920516 224 log.go:172] (0xc0000f1900) (3) Writing data frame\nI0321 11:00:21.920664 224 log.go:172] (0xc0000f1900) (3) Writing data frame\nI0321 11:00:21.921997 224 log.go:172] (0xc0006e02c0) Data frame received for 5\nI0321 11:00:21.922027 224 log.go:172] (0xc0008821e0) (5) Data frame handling\nI0321 11:00:21.922055 224 log.go:172] (0xc0008821e0) (5) Data frame sent\nI0321 11:00:21.922657 224 log.go:172] 
(0xc0006e02c0) Data frame received for 5\nI0321 11:00:21.922668 224 log.go:172] (0xc0008821e0) (5) Data frame handling\nI0321 11:00:21.922673 224 log.go:172] (0xc0008821e0) (5) Data frame sent\nI0321 11:00:21.971841 224 log.go:172] (0xc0006e02c0) Data frame received for 7\nI0321 11:00:21.971861 224 log.go:172] (0xc000882280) (7) Data frame handling\nI0321 11:00:21.971876 224 log.go:172] (0xc0006e02c0) Data frame received for 5\nI0321 11:00:21.971896 224 log.go:172] (0xc0008821e0) (5) Data frame handling\nI0321 11:00:21.972474 224 log.go:172] (0xc0006e02c0) Data frame received for 1\nI0321 11:00:21.972529 224 log.go:172] (0xc0006e02c0) (0xc0000f1900) Stream removed, broadcasting: 3\nI0321 11:00:21.972571 224 log.go:172] (0xc000882140) (1) Data frame handling\nI0321 11:00:21.972789 224 log.go:172] (0xc000882140) (1) Data frame sent\nI0321 11:00:21.972809 224 log.go:172] (0xc0006e02c0) (0xc000882140) Stream removed, broadcasting: 1\nI0321 11:00:21.972880 224 log.go:172] (0xc0006e02c0) (0xc000882140) Stream removed, broadcasting: 1\nI0321 11:00:21.972902 224 log.go:172] (0xc0006e02c0) (0xc0000f1900) Stream removed, broadcasting: 3\nI0321 11:00:21.972913 224 log.go:172] (0xc0006e02c0) (0xc0008821e0) Stream removed, broadcasting: 5\nI0321 11:00:21.972969 224 log.go:172] (0xc0006e02c0) Go away received\nI0321 11:00:21.973297 224 log.go:172] (0xc0006e02c0) (0xc000882280) Stream removed, broadcasting: 7\n" Mar 21 11:00:21.997: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:00:24.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7mw8d" for this suite. 
Mar 21 11:00:30.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:00:30.062: INFO: namespace: e2e-tests-kubectl-7mw8d, resource: bindings, ignored listing per whitelist Mar 21 11:00:30.129: INFO: namespace e2e-tests-kubectl-7mw8d deletion completed in 6.120932986s • [SLOW TEST:11.158 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:00:30.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 21 11:00:30.232: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2dc6f0b9-6b63-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-tcm52" to be "success or failure" Mar 21 11:00:30.236: INFO: Pod 
"downwardapi-volume-2dc6f0b9-6b63-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.443522ms Mar 21 11:00:32.240: INFO: Pod "downwardapi-volume-2dc6f0b9-6b63-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007844625s Mar 21 11:00:34.245: INFO: Pod "downwardapi-volume-2dc6f0b9-6b63-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012526653s STEP: Saw pod success Mar 21 11:00:34.245: INFO: Pod "downwardapi-volume-2dc6f0b9-6b63-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:00:34.248: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2dc6f0b9-6b63-11ea-946c-0242ac11000f container client-container: STEP: delete the pod Mar 21 11:00:34.309: INFO: Waiting for pod downwardapi-volume-2dc6f0b9-6b63-11ea-946c-0242ac11000f to disappear Mar 21 11:00:34.314: INFO: Pod downwardapi-volume-2dc6f0b9-6b63-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:00:34.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tcm52" for this suite. 
Mar 21 11:00:40.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:00:40.384: INFO: namespace: e2e-tests-projected-tcm52, resource: bindings, ignored listing per whitelist Mar 21 11:00:40.412: INFO: namespace e2e-tests-projected-tcm52 deletion completed in 6.094378219s • [SLOW TEST:10.283 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:00:40.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 21 11:00:40.500: INFO: PodSpec: initContainers in spec.initContainers Mar 21 11:01:25.994: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-33e808ef-6b63-11ea-946c-0242ac11000f", 
GenerateName:"", Namespace:"e2e-tests-init-container-vfvhl", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-vfvhl/pods/pod-init-33e808ef-6b63-11ea-946c-0242ac11000f", UID:"33e9c9ae-6b63-11ea-99e8-0242ac110002", ResourceVersion:"1008416", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720385240, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"500052124"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4qdpp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001d3ba40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4qdpp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4qdpp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4qdpp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016f2288), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00186b680), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0016f23a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0016f23c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0016f23c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0016f23cc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720385240, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720385240, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720385240, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720385240, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.36", StartTime:(*v1.Time)(0xc001c1c880), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00124a2a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00124a310)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://2b40893b24e351d29b2fdcfea1bd0ddc5e634812125ba169f0a6d82497f7005f"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001c1c8c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001c1c8a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:01:25.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-vfvhl" for this suite. 
Mar 21 11:01:48.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:01:48.198: INFO: namespace: e2e-tests-init-container-vfvhl, resource: bindings, ignored listing per whitelist
Mar 21 11:01:48.219: INFO: namespace e2e-tests-init-container-vfvhl deletion completed in 22.174863003s
• [SLOW TEST:67.807 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:01:48.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 21 11:01:52.857: INFO: Successfully updated pod "labelsupdate5c53bfb0-6b63-11ea-946c-0242ac11000f"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:01:54.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-f57mw" for this suite.
Mar 21 11:02:16.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:02:16.997: INFO: namespace: e2e-tests-downward-api-f57mw, resource: bindings, ignored listing per whitelist
Mar 21 11:02:17.007: INFO: namespace e2e-tests-downward-api-f57mw deletion completed in 22.08858597s
• [SLOW TEST:28.788 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:02:17.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-9wqcw
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Mar 21 11:02:17.149: INFO: Found 0 stateful pods, waiting for 3
Mar 21 11:02:27.155: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 11:02:27.155: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 11:02:27.155: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 11:02:27.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9wqcw ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 21 11:02:27.397: INFO: stderr: "I0321 11:02:27.277981 250 log.go:172] (0xc00081c2c0) (0xc00070c640) Create stream\nI0321 11:02:27.278048 250 log.go:172] (0xc00081c2c0) (0xc00070c640) Stream added, broadcasting: 1\nI0321 11:02:27.281012 250 log.go:172] (0xc00081c2c0) Reply frame received for 1\nI0321 11:02:27.281058 250 log.go:172] (0xc00081c2c0) (0xc0005b4c80) Create stream\nI0321 11:02:27.281070 250 log.go:172] (0xc00081c2c0) (0xc0005b4c80) Stream added, broadcasting: 3\nI0321 11:02:27.282188 250 log.go:172] (0xc00081c2c0) Reply frame received for 3\nI0321 11:02:27.282259 250 log.go:172] (0xc00081c2c0) (0xc000670000) Create stream\nI0321 11:02:27.282287 250 log.go:172] (0xc00081c2c0) (0xc000670000) Stream added, broadcasting: 5\nI0321 11:02:27.283289 250 log.go:172] (0xc00081c2c0) Reply frame received for 5\nI0321 11:02:27.390460 250 log.go:172] (0xc00081c2c0) Data frame received for 3\nI0321 11:02:27.390501 250 log.go:172] (0xc0005b4c80) (3) Data frame handling\nI0321 11:02:27.390532 250 log.go:172] (0xc0005b4c80) (3) Data frame sent\nI0321 11:02:27.390545 250 log.go:172] (0xc00081c2c0) Data frame received for 3\nI0321 11:02:27.390555 250 log.go:172] (0xc0005b4c80) (3) Data frame handling\nI0321 11:02:27.390613 250 log.go:172] (0xc00081c2c0) Data frame received for 5\nI0321 11:02:27.390644 250 log.go:172] (0xc000670000) (5) Data frame handling\nI0321 11:02:27.392601 250 log.go:172] (0xc00081c2c0) Data frame received for 1\nI0321 11:02:27.392639 250 log.go:172] (0xc00070c640) (1) Data frame handling\nI0321 11:02:27.392654 250 log.go:172] (0xc00070c640) (1) Data frame sent\nI0321 11:02:27.392672 250 log.go:172] (0xc00081c2c0) (0xc00070c640) Stream removed, broadcasting: 1\nI0321 11:02:27.392730 250 log.go:172] (0xc00081c2c0) Go away received\nI0321 11:02:27.392922 250 log.go:172] (0xc00081c2c0) (0xc00070c640) Stream removed, broadcasting: 1\nI0321 11:02:27.392950 250 log.go:172] (0xc00081c2c0) (0xc0005b4c80) Stream removed, broadcasting: 3\nI0321 11:02:27.392966 250 log.go:172] (0xc00081c2c0) (0xc000670000) Stream removed, broadcasting: 5\n"
Mar 21 11:02:27.397: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 21 11:02:27.397: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Mar 21 11:02:37.430: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Mar 21 11:02:47.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9wqcw ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 21 11:02:47.662: INFO: stderr: "I0321 11:02:47.596289 272 log.go:172] (0xc000138580) (0xc0003275e0) Create stream\nI0321 11:02:47.596341 272 log.go:172] (0xc000138580) (0xc0003275e0) Stream added, broadcasting: 1\nI0321 11:02:47.598964 272 log.go:172] (0xc000138580) Reply frame received for 1\nI0321 11:02:47.599029 272 log.go:172] (0xc000138580) (0xc00062c000) Create stream\nI0321 11:02:47.599047 272 log.go:172] (0xc000138580) (0xc00062c000) Stream added, broadcasting: 3\nI0321 11:02:47.599885 272 log.go:172] (0xc000138580) Reply frame received for 3\nI0321 11:02:47.599920 272 log.go:172] (0xc000138580) (0xc000116000) Create stream\nI0321 11:02:47.599929 272 log.go:172] (0xc000138580) (0xc000116000) Stream added, broadcasting: 5\nI0321 11:02:47.600638 272 log.go:172] (0xc000138580) Reply frame received for 5\nI0321 11:02:47.656088 272 log.go:172] (0xc000138580) Data frame received for 5\nI0321 11:02:47.656235 272 log.go:172] (0xc000116000) (5) Data frame handling\nI0321 11:02:47.656265 272 log.go:172] (0xc000138580) Data frame received for 3\nI0321 11:02:47.656295 272 log.go:172] (0xc00062c000) (3) Data frame handling\nI0321 11:02:47.656311 272 log.go:172] (0xc00062c000) (3) Data frame sent\nI0321 11:02:47.656324 272 log.go:172] (0xc000138580) Data frame received for 3\nI0321 11:02:47.656335 272 log.go:172] (0xc00062c000) (3) Data frame handling\nI0321 11:02:47.657873 272 log.go:172] (0xc000138580) Data frame received for 1\nI0321 11:02:47.657913 272 log.go:172] (0xc0003275e0) (1) Data frame handling\nI0321 11:02:47.657956 272 log.go:172] (0xc0003275e0) (1) Data frame sent\nI0321 11:02:47.657979 272 log.go:172] (0xc000138580) (0xc0003275e0) Stream removed, broadcasting: 1\nI0321 11:02:47.658054 272 log.go:172] (0xc000138580) Go away received\nI0321 11:02:47.658303 272 log.go:172] (0xc000138580) (0xc0003275e0) Stream removed, broadcasting: 1\nI0321 11:02:47.658326 272 log.go:172] (0xc000138580) (0xc00062c000) Stream removed, broadcasting: 3\nI0321 11:02:47.658360 272 log.go:172] (0xc000138580) (0xc000116000) Stream removed, broadcasting: 5\n"
Mar 21 11:02:47.662: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 21 11:02:47.662: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Mar 21 11:02:57.708: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wqcw/ss2 to complete update
Mar 21 11:02:57.709: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Mar 21 11:02:57.709: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Mar 21 11:02:57.709: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Mar 21 11:03:07.732: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wqcw/ss2 to complete update
Mar 21 11:03:07.732: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Mar 21 11:03:07.732: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Mar 21 11:03:17.716: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wqcw/ss2 to complete update
Mar 21 11:03:17.716: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Mar 21 11:03:27.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9wqcw ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 21 11:03:27.970: INFO: stderr: "I0321 11:03:27.866824 294 log.go:172] (0xc00070e370) (0xc000667400) Create stream\nI0321 11:03:27.866892 294 log.go:172] (0xc00070e370) (0xc000667400) Stream added, broadcasting: 1\nI0321 11:03:27.869331 294 log.go:172] (0xc00070e370) Reply frame received for 1\nI0321 11:03:27.869379 294 log.go:172] (0xc00070e370) (0xc0006674a0) Create stream\nI0321 11:03:27.869394 294 log.go:172] (0xc00070e370) (0xc0006674a0) Stream added, broadcasting: 3\nI0321 11:03:27.870188 294 log.go:172] (0xc00070e370) Reply frame received for 3\nI0321 11:03:27.870241 294 log.go:172] (0xc00070e370) (0xc000704000) Create stream\nI0321 11:03:27.870265 294 log.go:172] (0xc00070e370) (0xc000704000) Stream added, broadcasting: 5\nI0321 11:03:27.871050 294 log.go:172] (0xc00070e370) Reply frame received for 5\nI0321 11:03:27.966038 294 log.go:172] (0xc00070e370) Data frame received for 5\nI0321 11:03:27.966072 294 log.go:172] (0xc000704000) (5) Data frame handling\nI0321 11:03:27.966089 294 log.go:172] (0xc00070e370) Data frame received for 3\nI0321 11:03:27.966094 294 log.go:172] (0xc0006674a0) (3) Data frame handling\nI0321 11:03:27.966100 294 log.go:172] (0xc0006674a0) (3) Data frame sent\nI0321 11:03:27.966106 294 log.go:172] (0xc00070e370) Data frame received for 3\nI0321 11:03:27.966113 294 log.go:172] (0xc0006674a0) (3) Data frame handling\nI0321 11:03:27.967465 294 log.go:172] (0xc00070e370) Data frame received for 1\nI0321 11:03:27.967486 294 log.go:172] (0xc000667400) (1) Data frame handling\nI0321 11:03:27.967493 294 log.go:172] (0xc000667400) (1) Data frame sent\nI0321 11:03:27.967509 294 log.go:172] (0xc00070e370) (0xc000667400) Stream removed, broadcasting: 1\nI0321 11:03:27.967538 294 log.go:172] (0xc00070e370) Go away received\nI0321 11:03:27.967724 294 log.go:172] (0xc00070e370) (0xc000667400) Stream removed, broadcasting: 1\nI0321 11:03:27.967737 294 log.go:172] (0xc00070e370) (0xc0006674a0) Stream removed, broadcasting: 3\nI0321 11:03:27.967742 294 log.go:172] (0xc00070e370) (0xc000704000) Stream removed, broadcasting: 5\n"
Mar 21 11:03:27.970: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 21 11:03:27.970: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Mar 21 11:03:38.002: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Mar 21 11:03:48.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9wqcw ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 21 11:03:48.210: INFO: stderr: "I0321 11:03:48.144608 317 log.go:172] (0xc0006fe420) (0xc00001d400) Create stream\nI0321 11:03:48.144678 317 log.go:172] (0xc0006fe420) (0xc00001d400) Stream added, broadcasting: 1\nI0321 11:03:48.148020 317 log.go:172] (0xc0006fe420) Reply frame received for 1\nI0321 11:03:48.148082 317 log.go:172] (0xc0006fe420) (0xc000434000) Create stream\nI0321 11:03:48.148098 317 log.go:172] (0xc0006fe420) (0xc000434000) Stream added, broadcasting: 3\nI0321 11:03:48.149068 317 log.go:172] (0xc0006fe420) Reply frame received for 3\nI0321 11:03:48.149244 317 log.go:172] (0xc0006fe420) (0xc0004340a0) Create stream\nI0321 11:03:48.149268 317 log.go:172] (0xc0006fe420) (0xc0004340a0) Stream added, broadcasting: 5\nI0321 11:03:48.150178 317 log.go:172] (0xc0006fe420) Reply frame received for 5\nI0321 11:03:48.203971 317 log.go:172] (0xc0006fe420) Data frame received for 5\nI0321 11:03:48.203997 317 log.go:172] (0xc0004340a0) (5) Data frame handling\nI0321 11:03:48.204038 317 log.go:172] (0xc0006fe420) Data frame received for 3\nI0321 11:03:48.204068 317 log.go:172] (0xc000434000) (3) Data frame handling\nI0321 11:03:48.204100 317 log.go:172] (0xc000434000) (3) Data frame sent\nI0321 11:03:48.204120 317 log.go:172] (0xc0006fe420) Data frame received for 3\nI0321 11:03:48.204130 317 log.go:172] (0xc000434000) (3) Data frame handling\nI0321 11:03:48.206181 317 log.go:172] (0xc0006fe420) Data frame received for 1\nI0321 11:03:48.206200 317 log.go:172] (0xc00001d400) (1) Data frame handling\nI0321 11:03:48.206219 317 log.go:172] (0xc00001d400) (1) Data frame sent\nI0321 11:03:48.206237 317 log.go:172] (0xc0006fe420) (0xc00001d400) Stream removed, broadcasting: 1\nI0321 11:03:48.206253 317 log.go:172] (0xc0006fe420) Go away received\nI0321 11:03:48.206460 317 log.go:172] (0xc0006fe420) (0xc00001d400) Stream removed, broadcasting: 1\nI0321 11:03:48.206493 317 log.go:172] (0xc0006fe420) (0xc000434000) Stream removed, broadcasting: 3\nI0321 11:03:48.206511 317 log.go:172] (0xc0006fe420) (0xc0004340a0) Stream removed, broadcasting: 5\n"
Mar 21 11:03:48.210: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 21 11:03:48.210: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Mar 21 11:03:58.227: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wqcw/ss2 to complete update
Mar 21 11:03:58.227: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Mar 21 11:03:58.227: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Mar 21 11:03:58.227: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Mar 21 11:04:08.237: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wqcw/ss2 to complete update
Mar 21 11:04:08.237: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Mar 21 11:04:08.237: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Mar 21 11:04:18.239: INFO: Waiting for StatefulSet e2e-tests-statefulset-9wqcw/ss2 to complete update
Mar 21 11:04:18.239: INFO: Waiting for Pod e2e-tests-statefulset-9wqcw/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Mar 21 11:04:28.236: INFO: Deleting all statefulset in ns e2e-tests-statefulset-9wqcw
Mar 21 11:04:28.238: INFO: Scaling statefulset ss2 to 0
Mar 21 11:04:58.257: INFO: Waiting for statefulset status.replicas updated to 0
Mar 21 11:04:58.260: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:04:58.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-9wqcw" for this suite.
Mar 21 11:05:04.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:05:04.369: INFO: namespace: e2e-tests-statefulset-9wqcw, resource: bindings, ignored listing per whitelist
Mar 21 11:05:04.422: INFO: namespace e2e-tests-statefulset-9wqcw deletion completed in 6.14335865s
• [SLOW TEST:167.416 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:05:04.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 21 11:05:04.501: INFO: Creating ReplicaSet my-hostname-basic-d1437e1d-6b63-11ea-946c-0242ac11000f
Mar 21 11:05:04.516: INFO: Pod name my-hostname-basic-d1437e1d-6b63-11ea-946c-0242ac11000f: Found 0 pods out of 1
Mar 21 11:05:09.521: INFO: Pod name my-hostname-basic-d1437e1d-6b63-11ea-946c-0242ac11000f: Found 1 pods out of 1
Mar 21 11:05:09.521: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d1437e1d-6b63-11ea-946c-0242ac11000f" is running
Mar 21 11:05:09.524: INFO: Pod "my-hostname-basic-d1437e1d-6b63-11ea-946c-0242ac11000f-x94k7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 11:05:04 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 11:05:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 11:05:07 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 11:05:04 +0000 UTC Reason: Message:}])
Mar 21 11:05:09.524: INFO: Trying to dial the pod
Mar 21 11:05:14.537: INFO: Controller my-hostname-basic-d1437e1d-6b63-11ea-946c-0242ac11000f: Got expected result from replica 1 [my-hostname-basic-d1437e1d-6b63-11ea-946c-0242ac11000f-x94k7]: "my-hostname-basic-d1437e1d-6b63-11ea-946c-0242ac11000f-x94k7", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:05:14.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-jwx59" for this suite.
Mar 21 11:05:20.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:05:20.575: INFO: namespace: e2e-tests-replicaset-jwx59, resource: bindings, ignored listing per whitelist
Mar 21 11:05:20.633: INFO: namespace e2e-tests-replicaset-jwx59 deletion completed in 6.092413232s
• [SLOW TEST:16.211 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:05:20.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 21 11:05:20.748: INFO: Waiting up to 5m0s for pod "pod-daf0e274-6b63-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-v462k" to be "success or failure"
Mar 21 11:05:20.752: INFO: Pod "pod-daf0e274-6b63-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.420168ms
Mar 21 11:05:22.756: INFO: Pod "pod-daf0e274-6b63-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007619771s
Mar 21 11:05:24.760: INFO: Pod "pod-daf0e274-6b63-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011908149s
STEP: Saw pod success
Mar 21 11:05:24.760: INFO: Pod "pod-daf0e274-6b63-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:05:24.763: INFO: Trying to get logs from node hunter-worker pod pod-daf0e274-6b63-11ea-946c-0242ac11000f container test-container:
STEP: delete the pod
Mar 21 11:05:24.790: INFO: Waiting for pod pod-daf0e274-6b63-11ea-946c-0242ac11000f to disappear
Mar 21 11:05:24.800: INFO: Pod pod-daf0e274-6b63-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:05:24.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-v462k" for this suite.
Mar 21 11:05:30.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:05:30.840: INFO: namespace: e2e-tests-emptydir-v462k, resource: bindings, ignored listing per whitelist
Mar 21 11:05:30.967: INFO: namespace e2e-tests-emptydir-v462k deletion completed in 6.163806429s
• [SLOW TEST:10.333 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:05:30.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-e114ce8d-6b63-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 21 11:05:31.096: INFO: Waiting up to 5m0s for pod "pod-configmaps-e11c9d11-6b63-11ea-946c-0242ac11000f" in namespace "e2e-tests-configmap-jj8th" to be "success or failure"
Mar 21 11:05:31.098: INFO: Pod "pod-configmaps-e11c9d11-6b63-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.510941ms
Mar 21 11:05:33.102: INFO: Pod "pod-configmaps-e11c9d11-6b63-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006344606s
Mar 21 11:05:35.106: INFO: Pod "pod-configmaps-e11c9d11-6b63-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010776951s
STEP: Saw pod success
Mar 21 11:05:35.106: INFO: Pod "pod-configmaps-e11c9d11-6b63-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:05:35.110: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-e11c9d11-6b63-11ea-946c-0242ac11000f container configmap-volume-test:
STEP: delete the pod
Mar 21 11:05:35.170: INFO: Waiting for pod pod-configmaps-e11c9d11-6b63-11ea-946c-0242ac11000f to disappear
Mar 21 11:05:35.175: INFO: Pod pod-configmaps-e11c9d11-6b63-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:05:35.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jj8th" for this suite.
Mar 21 11:05:41.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:05:41.210: INFO: namespace: e2e-tests-configmap-jj8th, resource: bindings, ignored listing per whitelist
Mar 21 11:05:41.267: INFO: namespace e2e-tests-configmap-jj8th deletion completed in 6.088762122s
• [SLOW TEST:10.300 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:05:41.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Mar 21 11:05:45.495: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:06:09.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-h6dct" for this suite.
Mar 21 11:06:15.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:06:15.620: INFO: namespace: e2e-tests-namespaces-h6dct, resource: bindings, ignored listing per whitelist
Mar 21 11:06:15.681: INFO: namespace e2e-tests-namespaces-h6dct deletion completed in 6.109248663s
STEP: Destroying namespace "e2e-tests-nsdeletetest-v94jx" for this suite.
Mar 21 11:06:15.683: INFO: Namespace e2e-tests-nsdeletetest-v94jx was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-6tn8c" for this suite.
Mar 21 11:06:21.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:06:21.763: INFO: namespace: e2e-tests-nsdeletetest-6tn8c, resource: bindings, ignored listing per whitelist
Mar 21 11:06:21.781: INFO: namespace e2e-tests-nsdeletetest-6tn8c deletion completed in 6.097895536s
• [SLOW TEST:40.513 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:06:21.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-ff65f888-6b63-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 21 11:06:21.920: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ff6792ef-6b63-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-b5kgk" to be "success or failure"
Mar 21 11:06:21.937: INFO: Pod "pod-projected-secrets-ff6792ef-6b63-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.235812ms
Mar 21 11:06:23.941: INFO: Pod "pod-projected-secrets-ff6792ef-6b63-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020868175s
Mar 21 11:06:25.945: INFO: Pod "pod-projected-secrets-ff6792ef-6b63-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025169421s
STEP: Saw pod success
Mar 21 11:06:25.945: INFO: Pod "pod-projected-secrets-ff6792ef-6b63-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:06:25.948: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-ff6792ef-6b63-11ea-946c-0242ac11000f container projected-secret-volume-test:
STEP: delete the pod
Mar 21 11:06:26.002: INFO: Waiting for pod pod-projected-secrets-ff6792ef-6b63-11ea-946c-0242ac11000f to disappear
Mar 21 11:06:26.029: INFO: Pod pod-projected-secrets-ff6792ef-6b63-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:06:26.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b5kgk" for this suite.
Mar 21 11:06:32.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:06:32.093: INFO: namespace: e2e-tests-projected-b5kgk, resource: bindings, ignored listing per whitelist
Mar 21 11:06:32.140: INFO: namespace e2e-tests-projected-b5kgk deletion completed in 6.107599081s
• [SLOW TEST:10.358 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:06:32.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-058f7e76-6b64-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 21 11:06:32.255: INFO: Waiting up to 5m0s for pod "pod-secrets-05902416-6b64-11ea-946c-0242ac11000f" in namespace "e2e-tests-secrets-nwrwx" to be "success or failure"
Mar 21 11:06:32.257: INFO: Pod "pod-secrets-05902416-6b64-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.92017ms
Mar 21 11:06:34.296: INFO: Pod "pod-secrets-05902416-6b64-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041120354s
Mar 21 11:06:36.300: INFO: Pod "pod-secrets-05902416-6b64-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04553681s
STEP: Saw pod success
Mar 21 11:06:36.300: INFO: Pod "pod-secrets-05902416-6b64-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:06:36.303: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-05902416-6b64-11ea-946c-0242ac11000f container secret-volume-test:
STEP: delete the pod
Mar 21 11:06:36.368: INFO: Waiting for pod pod-secrets-05902416-6b64-11ea-946c-0242ac11000f to disappear
Mar 21 11:06:36.374: INFO: Pod pod-secrets-05902416-6b64-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:06:36.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nwrwx" for this suite.
Mar 21 11:06:42.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:06:42.425: INFO: namespace: e2e-tests-secrets-nwrwx, resource: bindings, ignored listing per whitelist
Mar 21 11:06:42.470: INFO: namespace e2e-tests-secrets-nwrwx deletion completed in 6.092806571s
• [SLOW TEST:10.330 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:06:42.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0321 11:06:54.548663 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 21 11:06:54.548: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:06:54.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-69n7n" for this suite.
Mar 21 11:07:02.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:07:02.811: INFO: namespace: e2e-tests-gc-69n7n, resource: bindings, ignored listing per whitelist
Mar 21 11:07:02.866: INFO: namespace e2e-tests-gc-69n7n deletion completed in 8.150437614s
• [SLOW TEST:20.396 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:07:02.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Mar 21 11:07:02.997: INFO: Waiting up to 5m0s for pod "client-containers-17e39800-6b64-11ea-946c-0242ac11000f" in namespace "e2e-tests-containers-g97mb" to be "success or failure"
Mar 21 11:07:03.001: INFO: Pod "client-containers-17e39800-6b64-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056716ms
Mar 21 11:07:05.014: INFO: Pod "client-containers-17e39800-6b64-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017018044s
Mar 21 11:07:07.018: INFO: Pod "client-containers-17e39800-6b64-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021680098s
STEP: Saw pod success
Mar 21 11:07:07.019: INFO: Pod "client-containers-17e39800-6b64-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:07:07.022: INFO: Trying to get logs from node hunter-worker2 pod client-containers-17e39800-6b64-11ea-946c-0242ac11000f container test-container:
STEP: delete the pod
Mar 21 11:07:07.044: INFO: Waiting for pod client-containers-17e39800-6b64-11ea-946c-0242ac11000f to disappear
Mar 21 11:07:07.086: INFO: Pod client-containers-17e39800-6b64-11ea-946c-0242ac11000f no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:07:07.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-g97mb" for this suite.
Mar 21 11:07:13.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:07:13.148: INFO: namespace: e2e-tests-containers-g97mb, resource: bindings, ignored listing per whitelist
Mar 21 11:07:13.186: INFO: namespace e2e-tests-containers-g97mb deletion completed in 6.095554882s
• [SLOW TEST:10.319 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:07:13.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9825n
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 21 11:07:13.275: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 21 11:07:37.400: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.51 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9825n PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 21 11:07:37.400: INFO: >>> kubeConfig: /root/.kube/config
I0321 11:07:37.435034 6 log.go:172] (0xc0015ac420) (0xc001ff1ea0) Create stream
I0321 11:07:37.435061 6 log.go:172] (0xc0015ac420) (0xc001ff1ea0) Stream added, broadcasting: 1
I0321 11:07:37.441291 6 log.go:172] (0xc0015ac420) Reply frame received for 1
I0321 11:07:37.441346 6 log.go:172] (0xc0015ac420) (0xc000b7c000) Create stream
I0321 11:07:37.441361 6 log.go:172] (0xc0015ac420) (0xc000b7c000) Stream added, broadcasting: 3
I0321 11:07:37.442359 6 log.go:172] (0xc0015ac420) Reply frame received for 3
I0321 11:07:37.442399 6 log.go:172] (0xc0015ac420) (0xc001925400) Create stream
I0321 11:07:37.442411 6 log.go:172] (0xc0015ac420) (0xc001925400) Stream added, broadcasting: 5
I0321 11:07:37.443252 6 log.go:172] (0xc0015ac420) Reply frame received for 5
I0321 11:07:38.501555 6 log.go:172] (0xc0015ac420) Data frame received for 5
I0321 11:07:38.501609 6 log.go:172] (0xc001925400) (5) Data frame handling
I0321 11:07:38.501642 6 log.go:172] (0xc0015ac420) Data frame received for 3
I0321 11:07:38.501683 6 log.go:172] (0xc000b7c000) (3) Data frame handling
I0321 11:07:38.501712 6 log.go:172] (0xc000b7c000) (3) Data frame sent
I0321 11:07:38.501941 6 log.go:172] (0xc0015ac420) Data frame received for 3
I0321 11:07:38.501976 6 log.go:172] (0xc000b7c000) (3) Data frame handling
I0321 11:07:38.504814 6 log.go:172] (0xc0015ac420) Data frame received for 1
I0321 11:07:38.504855 6 log.go:172] (0xc001ff1ea0) (1) Data frame handling
I0321 11:07:38.504886 6 log.go:172] (0xc001ff1ea0) (1) Data frame sent
I0321 11:07:38.504908 6 log.go:172] (0xc0015ac420) (0xc001ff1ea0) Stream removed, broadcasting: 1
I0321 11:07:38.504927 6 log.go:172] (0xc0015ac420) Go away received
I0321 11:07:38.505449 6 log.go:172] (0xc0015ac420) (0xc001ff1ea0) Stream removed, broadcasting: 1
I0321 11:07:38.505474 6 log.go:172] (0xc0015ac420) (0xc000b7c000) Stream removed, broadcasting: 3
I0321 11:07:38.505486 6 log.go:172] (0xc0015ac420) (0xc001925400) Stream removed, broadcasting: 5
Mar 21 11:07:38.505: INFO: Found all expected endpoints: [netserver-0]
Mar 21 11:07:38.509: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.186 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9825n PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 21 11:07:38.509: INFO: >>> kubeConfig: /root/.kube/config
I0321 11:07:38.535242 6 log.go:172] (0xc0009f13f0) (0xc0019255e0) Create stream
I0321 11:07:38.535276 6 log.go:172] (0xc0009f13f0) (0xc0019255e0) Stream added, broadcasting: 1
I0321 11:07:38.537573 6 log.go:172] (0xc0009f13f0) Reply frame received for 1
I0321 11:07:38.537610 6 log.go:172] (0xc0009f13f0) (0xc001ff1f40) Create stream
I0321 11:07:38.537622 6 log.go:172] (0xc0009f13f0) (0xc001ff1f40) Stream added, broadcasting: 3
I0321 11:07:38.538622 6 log.go:172] (0xc0009f13f0) Reply frame received for 3
I0321 11:07:38.538662 6 log.go:172] (0xc0009f13f0) (0xc000b7c0a0) Create stream
I0321 11:07:38.538676 6 log.go:172] (0xc0009f13f0) (0xc000b7c0a0) Stream added, broadcasting: 5
I0321 11:07:38.539548 6 log.go:172] (0xc0009f13f0) Reply frame received for 5
I0321 11:07:39.614555 6 log.go:172] (0xc0009f13f0) Data frame received for 5
I0321 11:07:39.614606 6 log.go:172] (0xc000b7c0a0) (5) Data frame handling
I0321 11:07:39.614652 6 log.go:172] (0xc0009f13f0) Data frame received for 3
I0321 11:07:39.614708 6 log.go:172] (0xc001ff1f40) (3) Data frame handling
I0321 11:07:39.614747 6 log.go:172] (0xc001ff1f40) (3) Data frame sent
I0321 11:07:39.614899 6 log.go:172] (0xc0009f13f0) Data frame received for 3
I0321 11:07:39.614926 6 log.go:172] (0xc001ff1f40) (3) Data frame handling
I0321 11:07:39.617028 6 log.go:172] (0xc0009f13f0) Data frame received for 1
I0321 11:07:39.617070 6 log.go:172] (0xc0019255e0) (1) Data frame handling
I0321 11:07:39.617102 6 log.go:172] (0xc0019255e0) (1) Data frame sent
I0321 11:07:39.617271 6 log.go:172] (0xc0009f13f0) (0xc0019255e0) Stream removed, broadcasting: 1
I0321 11:07:39.617511 6 log.go:172] (0xc0009f13f0) Go away received
I0321 11:07:39.617757 6 log.go:172] (0xc0009f13f0) (0xc0019255e0) Stream removed, broadcasting: 1
I0321 11:07:39.617801 6 log.go:172] (0xc0009f13f0) (0xc001ff1f40) Stream removed, broadcasting: 3
I0321 11:07:39.617820 6 log.go:172] (0xc0009f13f0) (0xc000b7c0a0) Stream removed, broadcasting: 5
Mar 21 11:07:39.617: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:07:39.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-9825n" for this suite.
Mar 21 11:08:01.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:08:01.713: INFO: namespace: e2e-tests-pod-network-test-9825n, resource: bindings, ignored listing per whitelist
Mar 21 11:08:01.717: INFO: namespace e2e-tests-pod-network-test-9825n deletion completed in 22.095043515s
• [SLOW TEST:48.531 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:08:01.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:08:08.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-4wzgb" for this suite.
Mar 21 11:08:30.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:08:30.929: INFO: namespace: e2e-tests-replication-controller-4wzgb, resource: bindings, ignored listing per whitelist
Mar 21 11:08:30.990: INFO: namespace e2e-tests-replication-controller-4wzgb deletion completed in 22.111248855s
• [SLOW TEST:29.273 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:08:30.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-4c6577fc-6b64-11ea-946c-0242ac11000f
STEP: Creating secret with name s-test-opt-upd-4c65786e-6b64-11ea-946c-0242ac11000f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4c6577fc-6b64-11ea-946c-0242ac11000f
STEP: Updating secret s-test-opt-upd-4c65786e-6b64-11ea-946c-0242ac11000f
STEP: Creating secret with name s-test-opt-create-4c6578b2-6b64-11ea-946c-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:09:55.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h4rv4" for this suite.
Mar 21 11:10:17.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:10:17.770: INFO: namespace: e2e-tests-projected-h4rv4, resource: bindings, ignored listing per whitelist
Mar 21 11:10:17.828: INFO: namespace e2e-tests-projected-h4rv4 deletion completed in 22.141050307s
• [SLOW TEST:106.838 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:10:17.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Mar 21 11:10:17.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:20.199: INFO: stderr: ""
Mar 21 11:10:20.199: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 21 11:10:20.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:20.382: INFO: stderr: ""
Mar 21 11:10:20.382: INFO: stdout: "update-demo-nautilus-hmr7w update-demo-nautilus-svjp5 "
Mar 21 11:10:20.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hmr7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:20.501: INFO: stderr: ""
Mar 21 11:10:20.501: INFO: stdout: ""
Mar 21 11:10:20.501: INFO: update-demo-nautilus-hmr7w is created but not running
Mar 21 11:10:25.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:25.609: INFO: stderr: ""
Mar 21 11:10:25.609: INFO: stdout: "update-demo-nautilus-hmr7w update-demo-nautilus-svjp5 "
Mar 21 11:10:25.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hmr7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:25.713: INFO: stderr: ""
Mar 21 11:10:25.713: INFO: stdout: "true"
Mar 21 11:10:25.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hmr7w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:25.818: INFO: stderr: ""
Mar 21 11:10:25.818: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 21 11:10:25.818: INFO: validating pod update-demo-nautilus-hmr7w
Mar 21 11:10:25.823: INFO: got data: {
  "image": "nautilus.jpg"
}
Mar 21 11:10:25.823: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 21 11:10:25.823: INFO: update-demo-nautilus-hmr7w is verified up and running
Mar 21 11:10:25.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-svjp5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:25.918: INFO: stderr: ""
Mar 21 11:10:25.918: INFO: stdout: "true"
Mar 21 11:10:25.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-svjp5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:26.016: INFO: stderr: ""
Mar 21 11:10:26.016: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 21 11:10:26.016: INFO: validating pod update-demo-nautilus-svjp5
Mar 21 11:10:26.020: INFO: got data: {
  "image": "nautilus.jpg"
}
Mar 21 11:10:26.020: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 21 11:10:26.020: INFO: update-demo-nautilus-svjp5 is verified up and running
STEP: rolling-update to new replication controller
Mar 21 11:10:26.023: INFO: scanned /root for discovery docs:
Mar 21 11:10:26.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:48.577: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar 21 11:10:48.577: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 21 11:10:48.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:48.689: INFO: stderr: ""
Mar 21 11:10:48.689: INFO: stdout: "update-demo-kitten-45vpn update-demo-kitten-wntpl "
Mar 21 11:10:48.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-45vpn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:48.781: INFO: stderr: ""
Mar 21 11:10:48.781: INFO: stdout: "true"
Mar 21 11:10:48.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-45vpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:48.882: INFO: stderr: ""
Mar 21 11:10:48.882: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 21 11:10:48.882: INFO: validating pod update-demo-kitten-45vpn
Mar 21 11:10:48.886: INFO: got data: {
  "image": "kitten.jpg"
}
Mar 21 11:10:48.886: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 21 11:10:48.886: INFO: update-demo-kitten-45vpn is verified up and running
Mar 21 11:10:48.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wntpl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:48.983: INFO: stderr: ""
Mar 21 11:10:48.983: INFO: stdout: "true"
Mar 21 11:10:48.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wntpl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jtwrk'
Mar 21 11:10:49.078: INFO: stderr: ""
Mar 21 11:10:49.078: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 21 11:10:49.078: INFO: validating pod update-demo-kitten-wntpl
Mar 21 11:10:49.083: INFO: got data: {
  "image": "kitten.jpg"
}
Mar 21 11:10:49.083: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 21 11:10:49.083: INFO: update-demo-kitten-wntpl is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:10:49.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jtwrk" for this suite.
Mar 21 11:11:13.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:11:13.109: INFO: namespace: e2e-tests-kubectl-jtwrk, resource: bindings, ignored listing per whitelist
Mar 21 11:11:13.196: INFO: namespace e2e-tests-kubectl-jtwrk deletion completed in 24.11003236s
• [SLOW TEST:55.368 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:11:13.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-ad127ed1-6b64-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 21 11:11:13.316: INFO: Waiting up to 5m0s for pod "pod-secrets-ad1572ab-6b64-11ea-946c-0242ac11000f" in namespace "e2e-tests-secrets-lws75" to be "success or failure"
Mar 21 11:11:13.320: INFO: Pod "pod-secrets-ad1572ab-6b64-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.642951ms
Mar 21 11:11:15.323: INFO: Pod "pod-secrets-ad1572ab-6b64-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007332802s
Mar 21 11:11:17.328: INFO: Pod "pod-secrets-ad1572ab-6b64-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011577089s
STEP: Saw pod success
Mar 21 11:11:17.328: INFO: Pod "pod-secrets-ad1572ab-6b64-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:11:17.331: INFO: Trying to get logs from node hunter-worker pod pod-secrets-ad1572ab-6b64-11ea-946c-0242ac11000f container secret-volume-test:
STEP: delete the pod
Mar 21 11:11:17.360: INFO: Waiting for pod pod-secrets-ad1572ab-6b64-11ea-946c-0242ac11000f to disappear
Mar 21 11:11:17.372: INFO: Pod pod-secrets-ad1572ab-6b64-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:11:17.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lws75" for this suite.
Mar 21 11:11:23.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:11:23.402: INFO: namespace: e2e-tests-secrets-lws75, resource: bindings, ignored listing per whitelist
Mar 21 11:11:23.494: INFO: namespace e2e-tests-secrets-lws75 deletion completed in 6.119113394s
• [SLOW TEST:10.297 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:11:23.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-b33b833a-6b64-11ea-946c-0242ac11000f
Mar 21 11:11:23.636: INFO: Pod name my-hostname-basic-b33b833a-6b64-11ea-946c-0242ac11000f: Found 0 pods out of 1
Mar 21 11:11:28.640: INFO: Pod name my-hostname-basic-b33b833a-6b64-11ea-946c-0242ac11000f: Found 1 pods out of 1
Mar 21 11:11:28.640: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b33b833a-6b64-11ea-946c-0242ac11000f" are running
Mar 21 11:11:28.643: INFO: Pod "my-hostname-basic-b33b833a-6b64-11ea-946c-0242ac11000f-ntxf7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 11:11:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 11:11:26 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 11:11:26 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-21 11:11:23 +0000 UTC Reason: Message:}])
Mar 21 11:11:28.643: INFO: Trying to dial the pod
Mar 21 11:11:33.666: INFO: Controller my-hostname-basic-b33b833a-6b64-11ea-946c-0242ac11000f: Got expected result from replica 1 [my-hostname-basic-b33b833a-6b64-11ea-946c-0242ac11000f-ntxf7]: "my-hostname-basic-b33b833a-6b64-11ea-946c-0242ac11000f-ntxf7", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:11:33.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-7d25n" for this suite.
Mar 21 11:11:39.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:11:39.709: INFO: namespace: e2e-tests-replication-controller-7d25n, resource: bindings, ignored listing per whitelist
Mar 21 11:11:39.767: INFO: namespace e2e-tests-replication-controller-7d25n deletion completed in 6.0966009s
• [SLOW TEST:16.273 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:11:39.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 21 11:11:39.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-b9vqh'
Mar 21 11:11:40.006: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 21 11:11:40.006: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Mar 21 11:11:42.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-b9vqh'
Mar 21 11:11:42.199: INFO: stderr: ""
Mar 21 11:11:42.199: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:11:42.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-b9vqh" for this suite.
Mar 21 11:13:04.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:13:04.276: INFO: namespace: e2e-tests-kubectl-b9vqh, resource: bindings, ignored listing per whitelist
Mar 21 11:13:04.296: INFO: namespace e2e-tests-kubectl-b9vqh deletion completed in 1m22.088216584s
• [SLOW TEST:84.528 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:13:04.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ef4a7746-6b64-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 21 11:13:04.407: INFO: Waiting up to 5m0s for pod "pod-secrets-ef4e7c78-6b64-11ea-946c-0242ac11000f" in namespace "e2e-tests-secrets-sm2sj" to be "success or failure"
Mar 21 11:13:04.427: INFO: Pod "pod-secrets-ef4e7c78-6b64-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.107019ms
Mar 21 11:13:06.431: INFO: Pod "pod-secrets-ef4e7c78-6b64-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023358344s
Mar 21 11:13:08.435: INFO: Pod "pod-secrets-ef4e7c78-6b64-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027614829s
STEP: Saw pod success
Mar 21 11:13:08.435: INFO: Pod "pod-secrets-ef4e7c78-6b64-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:13:08.438: INFO: Trying to get logs from node hunter-worker pod pod-secrets-ef4e7c78-6b64-11ea-946c-0242ac11000f container secret-volume-test:
STEP: delete the pod
Mar 21 11:13:08.456: INFO: Waiting for pod pod-secrets-ef4e7c78-6b64-11ea-946c-0242ac11000f to disappear
Mar 21 11:13:08.459: INFO: Pod pod-secrets-ef4e7c78-6b64-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:13:08.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-sm2sj" for this suite.
Mar 21 11:13:14.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:13:14.487: INFO: namespace: e2e-tests-secrets-sm2sj, resource: bindings, ignored listing per whitelist
Mar 21 11:13:14.555: INFO: namespace e2e-tests-secrets-sm2sj deletion completed in 6.091575519s
• [SLOW TEST:10.259 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:13:14.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 21 11:13:22.797: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 21 11:13:22.801: INFO: Pod pod-with-poststart-http-hook still exists
Mar 21 11:13:24.801: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 21 11:13:24.804: INFO: Pod pod-with-poststart-http-hook still exists
Mar 21 11:13:26.801: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 21 11:13:26.807: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:13:26.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-5cpdx" for this suite.
Mar 21 11:13:48.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:13:48.880: INFO: namespace: e2e-tests-container-lifecycle-hook-5cpdx, resource: bindings, ignored listing per whitelist
Mar 21 11:13:48.911: INFO: namespace e2e-tests-container-lifecycle-hook-5cpdx deletion completed in 22.100363733s
• [SLOW TEST:34.357 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:13:48.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:13:49.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-9nv6v" for this suite.
Mar 21 11:13:55.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:13:55.118: INFO: namespace: e2e-tests-services-9nv6v, resource: bindings, ignored listing per whitelist
Mar 21 11:13:55.137: INFO: namespace e2e-tests-services-9nv6v deletion completed in 6.103041531s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:6.225 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:13:55.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 21 11:13:55.262: INFO: Waiting up to 5m0s for pod "downward-api-0d9a474d-6b65-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-tzx22" to be "success or failure"
Mar 21 11:13:55.265: INFO: Pod "downward-api-0d9a474d-6b65-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.072808ms
Mar 21 11:13:57.268: INFO: Pod "downward-api-0d9a474d-6b65-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0068162s
Mar 21 11:13:59.273: INFO: Pod "downward-api-0d9a474d-6b65-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011203534s
STEP: Saw pod success
Mar 21 11:13:59.273: INFO: Pod "downward-api-0d9a474d-6b65-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:13:59.276: INFO: Trying to get logs from node hunter-worker pod downward-api-0d9a474d-6b65-11ea-946c-0242ac11000f container dapi-container:
STEP: delete the pod
Mar 21 11:13:59.308: INFO: Waiting for pod downward-api-0d9a474d-6b65-11ea-946c-0242ac11000f to disappear
Mar 21 11:13:59.322: INFO: Pod downward-api-0d9a474d-6b65-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:13:59.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tzx22" for this suite.
Mar 21 11:14:05.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:14:05.397: INFO: namespace: e2e-tests-downward-api-tzx22, resource: bindings, ignored listing per whitelist
Mar 21 11:14:05.417: INFO: namespace e2e-tests-downward-api-tzx22 deletion completed in 6.091395319s
• [SLOW TEST:10.280 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:14:05.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 21 11:14:05.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-g2npr'
Mar 21 11:14:05.657: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 21 11:14:05.657: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Mar 21 11:14:09.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-g2npr'
Mar 21 11:14:09.825: INFO: stderr: ""
Mar 21 11:14:09.825: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:14:09.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-g2npr" for this suite.
Mar 21 11:14:15.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:14:15.927: INFO: namespace: e2e-tests-kubectl-g2npr, resource: bindings, ignored listing per whitelist
Mar 21 11:14:15.938: INFO: namespace e2e-tests-kubectl-g2npr deletion completed in 6.091301194s
• [SLOW TEST:10.521 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:14:15.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-p9lr
STEP: Creating a pod to test atomic-volume-subpath
Mar 21 11:14:16.074: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-p9lr" in namespace "e2e-tests-subpath-q2xrt" to be "success or failure"
Mar 21 11:14:16.078: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Pending", Reason="", readiness=false. Elapsed: 3.85033ms
Mar 21 11:14:18.081: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007285879s
Mar 21 11:14:20.095: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021789203s
Mar 21 11:14:22.100: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Running", Reason="", readiness=false. Elapsed: 6.026008807s
Mar 21 11:14:24.103: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Running", Reason="", readiness=false. Elapsed: 8.029729765s
Mar 21 11:14:26.107: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Running", Reason="", readiness=false. Elapsed: 10.033466787s
Mar 21 11:14:28.122: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Running", Reason="", readiness=false. Elapsed: 12.04818645s
Mar 21 11:14:30.125: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Running", Reason="", readiness=false. Elapsed: 14.051486464s
Mar 21 11:14:32.129: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Running", Reason="", readiness=false. Elapsed: 16.05539048s
Mar 21 11:14:34.152: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Running", Reason="", readiness=false. Elapsed: 18.077888754s
Mar 21 11:14:36.170: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Running", Reason="", readiness=false. Elapsed: 20.096481458s
Mar 21 11:14:38.174: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Running", Reason="", readiness=false. Elapsed: 22.10074794s
Mar 21 11:14:40.320: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Running", Reason="", readiness=false. Elapsed: 24.246367471s
Mar 21 11:14:42.324: INFO: Pod "pod-subpath-test-downwardapi-p9lr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.250740196s
STEP: Saw pod success
Mar 21 11:14:42.324: INFO: Pod "pod-subpath-test-downwardapi-p9lr" satisfied condition "success or failure"
Mar 21 11:14:42.327: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-p9lr container test-container-subpath-downwardapi-p9lr:
STEP: delete the pod
Mar 21 11:14:42.359: INFO: Waiting for pod pod-subpath-test-downwardapi-p9lr to disappear
Mar 21 11:14:42.398: INFO: Pod pod-subpath-test-downwardapi-p9lr no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-p9lr
Mar 21 11:14:42.398: INFO: Deleting pod "pod-subpath-test-downwardapi-p9lr" in namespace "e2e-tests-subpath-q2xrt"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:14:42.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-q2xrt" for this suite.
Mar 21 11:14:48.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:14:48.462: INFO: namespace: e2e-tests-subpath-q2xrt, resource: bindings, ignored listing per whitelist
Mar 21 11:14:48.529: INFO: namespace e2e-tests-subpath-q2xrt deletion completed in 6.124897373s
• [SLOW TEST:32.591 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:14:48.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:15:48.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rr2f8" for this suite.
Mar 21 11:16:10.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:16:10.743: INFO: namespace: e2e-tests-container-probe-rr2f8, resource: bindings, ignored listing per whitelist
Mar 21 11:16:10.823: INFO: namespace e2e-tests-container-probe-rr2f8 deletion completed in 22.128191469s
• [SLOW TEST:82.294 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:16:10.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 21 11:16:15.470: INFO: Successfully updated pod "pod-update-5e7d7198-6b65-11ea-946c-0242ac11000f"
STEP: verifying the updated pod is in kubernetes
Mar 21 11:16:15.483: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:16:15.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-h82bw" for this suite.
Mar 21 11:16:37.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:16:37.573: INFO: namespace: e2e-tests-pods-h82bw, resource: bindings, ignored listing per whitelist
Mar 21 11:16:37.573: INFO: namespace e2e-tests-pods-h82bw deletion completed in 22.086935464s
• [SLOW TEST:26.749 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:16:37.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 21 11:16:37.670: INFO: Waiting up to 5m0s for pod "downward-api-6e6aac43-6b65-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-5fssl" to be "success or failure"
Mar 21 11:16:37.674: INFO: Pod "downward-api-6e6aac43-6b65-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.878575ms
Mar 21 11:16:39.678: INFO: Pod "downward-api-6e6aac43-6b65-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007704714s
Mar 21 11:16:41.682: INFO: Pod "downward-api-6e6aac43-6b65-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012120222s
STEP: Saw pod success
Mar 21 11:16:41.682: INFO: Pod "downward-api-6e6aac43-6b65-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:16:41.685: INFO: Trying to get logs from node hunter-worker pod downward-api-6e6aac43-6b65-11ea-946c-0242ac11000f container dapi-container:
STEP: delete the pod
Mar 21 11:16:41.712: INFO: Waiting for pod downward-api-6e6aac43-6b65-11ea-946c-0242ac11000f to disappear
Mar 21 11:16:41.716: INFO: Pod downward-api-6e6aac43-6b65-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:16:41.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5fssl" for this suite.
Mar 21 11:16:47.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:16:47.796: INFO: namespace: e2e-tests-downward-api-5fssl, resource: bindings, ignored listing per whitelist Mar 21 11:16:47.820: INFO: namespace e2e-tests-downward-api-5fssl deletion completed in 6.101187631s • [SLOW TEST:10.247 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:16:47.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 11:16:47.984: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 21 11:16:52.988: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 21 11:16:52.988: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 21 11:16:53.009: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-6wqc6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6wqc6/deployments/test-cleanup-deployment,UID:778efb9e-6b65-11ea-99e8-0242ac110002,ResourceVersion:1011623,Generation:1,CreationTimestamp:2020-03-21 11:16:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 21 11:16:53.034: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Mar 21 11:16:53.034: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 21 11:16:53.035: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-6wqc6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6wqc6/replicasets/test-cleanup-controller,UID:748f69d7-6b65-11ea-99e8-0242ac110002,ResourceVersion:1011624,Generation:1,CreationTimestamp:2020-03-21 11:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 778efb9e-6b65-11ea-99e8-0242ac110002 0xc00214c6a7 0xc00214c6a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 21 11:16:53.057: INFO: Pod "test-cleanup-controller-bfbsf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-bfbsf,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-6wqc6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6wqc6/pods/test-cleanup-controller-bfbsf,UID:7493029a-6b65-11ea-99e8-0242ac110002,ResourceVersion:1011615,Generation:0,CreationTimestamp:2020-03-21 11:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 748f69d7-6b65-11ea-99e8-0242ac110002 0xc001d634f7 0xc001d634f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qj2qg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qj2qg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qj2qg true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d63570} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d63590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:16:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:16:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:16:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:16:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.197,StartTime:2020-03-21 11:16:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-21 11:16:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d2ab73d59623e8df400e692ca9ac6220d6eaa65211ada89946074a62f01aba04}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:16:53.057: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6wqc6" for this suite. Mar 21 11:16:59.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:16:59.221: INFO: namespace: e2e-tests-deployment-6wqc6, resource: bindings, ignored listing per whitelist Mar 21 11:16:59.244: INFO: namespace e2e-tests-deployment-6wqc6 deletion completed in 6.153406121s • [SLOW TEST:11.423 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:16:59.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 21 11:16:59.347: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b55cc8d-6b65-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-d2ljz" to be "success or failure" Mar 21 11:16:59.351: INFO: Pod 
"downwardapi-volume-7b55cc8d-6b65-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.858969ms Mar 21 11:17:01.355: INFO: Pod "downwardapi-volume-7b55cc8d-6b65-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008177278s Mar 21 11:17:03.359: INFO: Pod "downwardapi-volume-7b55cc8d-6b65-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012454574s STEP: Saw pod success Mar 21 11:17:03.359: INFO: Pod "downwardapi-volume-7b55cc8d-6b65-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:17:03.362: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7b55cc8d-6b65-11ea-946c-0242ac11000f container client-container: STEP: delete the pod Mar 21 11:17:03.393: INFO: Waiting for pod downwardapi-volume-7b55cc8d-6b65-11ea-946c-0242ac11000f to disappear Mar 21 11:17:03.411: INFO: Pod downwardapi-volume-7b55cc8d-6b65-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:17:03.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-d2ljz" for this suite. 
Mar 21 11:17:09.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:17:09.451: INFO: namespace: e2e-tests-downward-api-d2ljz, resource: bindings, ignored listing per whitelist Mar 21 11:17:09.499: INFO: namespace e2e-tests-downward-api-d2ljz deletion completed in 6.085179461s • [SLOW TEST:10.255 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:17:09.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Mar 21 11:17:13.637: INFO: Pod pod-hostip-8175654d-6b65-11ea-946c-0242ac11000f has hostIP: 172.17.0.3 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:17:13.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-m245s" for this suite. 
Mar 21 11:17:35.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:17:35.694: INFO: namespace: e2e-tests-pods-m245s, resource: bindings, ignored listing per whitelist Mar 21 11:17:35.733: INFO: namespace e2e-tests-pods-m245s deletion completed in 22.091934392s • [SLOW TEST:26.234 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:17:35.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Mar 21 11:17:35.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2k425' Mar 21 11:17:36.079: INFO: stderr: "" Mar 21 11:17:36.079: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. 
Mar 21 11:17:37.083: INFO: Selector matched 1 pods for map[app:redis] Mar 21 11:17:37.083: INFO: Found 0 / 1 Mar 21 11:17:38.084: INFO: Selector matched 1 pods for map[app:redis] Mar 21 11:17:38.084: INFO: Found 0 / 1 Mar 21 11:17:39.084: INFO: Selector matched 1 pods for map[app:redis] Mar 21 11:17:39.084: INFO: Found 0 / 1 Mar 21 11:17:40.084: INFO: Selector matched 1 pods for map[app:redis] Mar 21 11:17:40.084: INFO: Found 1 / 1 Mar 21 11:17:40.084: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 21 11:17:40.088: INFO: Selector matched 1 pods for map[app:redis] Mar 21 11:17:40.088: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Mar 21 11:17:40.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kzzcl redis-master --namespace=e2e-tests-kubectl-2k425' Mar 21 11:17:40.199: INFO: stderr: "" Mar 21 11:17:40.199: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Mar 11:17:38.365 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Mar 11:17:38.365 # Server started, Redis version 3.2.12\n1:M 21 Mar 11:17:38.365 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Mar 11:17:38.365 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 21 11:17:40.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kzzcl redis-master --namespace=e2e-tests-kubectl-2k425 --tail=1' Mar 21 11:17:40.319: INFO: stderr: "" Mar 21 11:17:40.319: INFO: stdout: "1:M 21 Mar 11:17:38.365 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 21 11:17:40.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kzzcl redis-master --namespace=e2e-tests-kubectl-2k425 --limit-bytes=1' Mar 21 11:17:40.428: INFO: stderr: "" Mar 21 11:17:40.428: INFO: stdout: " " STEP: exposing timestamps Mar 21 11:17:40.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kzzcl redis-master --namespace=e2e-tests-kubectl-2k425 --tail=1 --timestamps' Mar 21 11:17:40.537: INFO: stderr: "" Mar 21 11:17:40.537: INFO: stdout: "2020-03-21T11:17:38.365932193Z 1:M 21 Mar 11:17:38.365 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 21 11:17:43.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kzzcl redis-master --namespace=e2e-tests-kubectl-2k425 --since=1s' Mar 21 11:17:43.146: INFO: stderr: "" Mar 21 11:17:43.146: INFO: stdout: "" Mar 21 11:17:43.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kzzcl redis-master --namespace=e2e-tests-kubectl-2k425 --since=24h' Mar 21 11:17:43.276: INFO: stderr: "" Mar 21 11:17:43.276: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Mar 11:17:38.365 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Mar 11:17:38.365 # Server started, Redis version 3.2.12\n1:M 21 Mar 11:17:38.365 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Mar 11:17:38.365 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Mar 21 11:17:43.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2k425' Mar 21 11:17:43.399: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 21 11:17:43.399: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 21 11:17:43.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-2k425' Mar 21 11:17:43.529: INFO: stderr: "No resources found.\n" Mar 21 11:17:43.529: INFO: stdout: "" Mar 21 11:17:43.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-2k425 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 21 11:17:43.632: INFO: stderr: "" Mar 21 11:17:43.632: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:17:43.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2k425" for this suite. 
Mar 21 11:18:05.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:18:05.802: INFO: namespace: e2e-tests-kubectl-2k425, resource: bindings, ignored listing per whitelist Mar 21 11:18:05.853: INFO: namespace e2e-tests-kubectl-2k425 deletion completed in 22.217862408s • [SLOW TEST:30.119 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:18:05.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 21 11:18:05.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-8n58r' Mar 21 11:18:06.111: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 21 11:18:06.111: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Mar 21 11:18:06.152: INFO: scanned /root for discovery docs: Mar 21 11:18:06.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-8n58r' Mar 21 11:18:21.966: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 21 11:18:21.966: INFO: stdout: "Created e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd\nScaling up e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Mar 21 11:18:21.966: INFO: stdout: "Created e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd\nScaling up e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 21 11:18:21.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8n58r' Mar 21 11:18:22.071: INFO: stderr: "" Mar 21 11:18:22.071: INFO: stdout: "e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd-pf8z9 " Mar 21 11:18:22.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd-pf8z9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8n58r' Mar 21 11:18:22.172: INFO: stderr: "" Mar 21 11:18:22.172: INFO: stdout: "true" Mar 21 11:18:22.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd-pf8z9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8n58r' Mar 21 11:18:22.277: INFO: stderr: "" Mar 21 11:18:22.278: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 21 11:18:22.278: INFO: e2e-test-nginx-rc-e21b2fc2db2c3cd6321c0f3bd8cb82bd-pf8z9 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Mar 21 11:18:22.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8n58r' Mar 21 11:18:22.379: INFO: stderr: "" Mar 21 11:18:22.379: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:18:22.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8n58r" for this suite. 
Mar 21 11:18:28.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:18:28.448: INFO: namespace: e2e-tests-kubectl-8n58r, resource: bindings, ignored listing per whitelist Mar 21 11:18:28.531: INFO: namespace e2e-tests-kubectl-8n58r deletion completed in 6.136369297s • [SLOW TEST:22.678 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:18:28.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 11:18:28.668: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.002751ms)
Mar 21 11:18:28.671: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.852224ms)
Mar 21 11:18:28.675: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.659199ms)
Mar 21 11:18:28.678: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.090645ms)
Mar 21 11:18:28.681: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.104785ms)
Mar 21 11:18:28.684: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.042654ms)
Mar 21 11:18:28.687: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.367689ms)
Mar 21 11:18:28.691: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.069937ms)
Mar 21 11:18:28.695: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.907884ms)
Mar 21 11:18:28.699: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.673081ms)
Mar 21 11:18:28.703: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.772014ms)
Mar 21 11:18:28.706: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.541959ms)
Mar 21 11:18:28.710: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.383928ms)
Mar 21 11:18:28.713: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.543753ms)
Mar 21 11:18:28.716: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.665352ms)
Mar 21 11:18:28.719: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.220813ms)
Mar 21 11:18:28.723: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.264583ms)
Mar 21 11:18:28.726: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.996686ms)
Mar 21 11:18:28.729: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.302396ms)
Mar 21 11:18:28.733: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.407021ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:18:28.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-8n5dg" for this suite. Mar 21 11:18:34.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:18:34.767: INFO: namespace: e2e-tests-proxy-8n5dg, resource: bindings, ignored listing per whitelist Mar 21 11:18:34.818: INFO: namespace e2e-tests-proxy-8n5dg deletion completed in 6.081542112s • [SLOW TEST:6.286 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:18:34.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-wswr8 STEP: creating a selector STEP: Creating the service 
pods in kubernetes Mar 21 11:18:34.930: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 21 11:18:57.024: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.64:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-wswr8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 11:18:57.024: INFO: >>> kubeConfig: /root/.kube/config I0321 11:18:57.061745 6 log.go:172] (0xc0015ac160) (0xc001f4ca00) Create stream I0321 11:18:57.061804 6 log.go:172] (0xc0015ac160) (0xc001f4ca00) Stream added, broadcasting: 1 I0321 11:18:57.064058 6 log.go:172] (0xc0015ac160) Reply frame received for 1 I0321 11:18:57.064090 6 log.go:172] (0xc0015ac160) (0xc00237a500) Create stream I0321 11:18:57.064101 6 log.go:172] (0xc0015ac160) (0xc00237a500) Stream added, broadcasting: 3 I0321 11:18:57.064966 6 log.go:172] (0xc0015ac160) Reply frame received for 3 I0321 11:18:57.064998 6 log.go:172] (0xc0015ac160) (0xc001542be0) Create stream I0321 11:18:57.065008 6 log.go:172] (0xc0015ac160) (0xc001542be0) Stream added, broadcasting: 5 I0321 11:18:57.066055 6 log.go:172] (0xc0015ac160) Reply frame received for 5 I0321 11:18:57.160103 6 log.go:172] (0xc0015ac160) Data frame received for 5 I0321 11:18:57.160147 6 log.go:172] (0xc001542be0) (5) Data frame handling I0321 11:18:57.160173 6 log.go:172] (0xc0015ac160) Data frame received for 3 I0321 11:18:57.160204 6 log.go:172] (0xc00237a500) (3) Data frame handling I0321 11:18:57.160288 6 log.go:172] (0xc00237a500) (3) Data frame sent I0321 11:18:57.160321 6 log.go:172] (0xc0015ac160) Data frame received for 3 I0321 11:18:57.160385 6 log.go:172] (0xc00237a500) (3) Data frame handling I0321 11:18:57.161931 6 log.go:172] (0xc0015ac160) Data frame received for 1 I0321 11:18:57.161966 6 log.go:172] (0xc001f4ca00) (1) Data frame handling I0321 11:18:57.161993 6 
log.go:172] (0xc001f4ca00) (1) Data frame sent I0321 11:18:57.162010 6 log.go:172] (0xc0015ac160) (0xc001f4ca00) Stream removed, broadcasting: 1 I0321 11:18:57.162024 6 log.go:172] (0xc0015ac160) Go away received I0321 11:18:57.162177 6 log.go:172] (0xc0015ac160) (0xc001f4ca00) Stream removed, broadcasting: 1 I0321 11:18:57.162225 6 log.go:172] (0xc0015ac160) (0xc00237a500) Stream removed, broadcasting: 3 I0321 11:18:57.162250 6 log.go:172] (0xc0015ac160) (0xc001542be0) Stream removed, broadcasting: 5 Mar 21 11:18:57.162: INFO: Found all expected endpoints: [netserver-0] Mar 21 11:18:57.165: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.201:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-wswr8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 11:18:57.165: INFO: >>> kubeConfig: /root/.kube/config I0321 11:18:57.199642 6 log.go:172] (0xc001d4a2c0) (0xc001542fa0) Create stream I0321 11:18:57.199673 6 log.go:172] (0xc001d4a2c0) (0xc001542fa0) Stream added, broadcasting: 1 I0321 11:18:57.205575 6 log.go:172] (0xc001d4a2c0) Reply frame received for 1 I0321 11:18:57.205651 6 log.go:172] (0xc001d4a2c0) (0xc0012ca140) Create stream I0321 11:18:57.205684 6 log.go:172] (0xc001d4a2c0) (0xc0012ca140) Stream added, broadcasting: 3 I0321 11:18:57.206747 6 log.go:172] (0xc001d4a2c0) Reply frame received for 3 I0321 11:18:57.206775 6 log.go:172] (0xc001d4a2c0) (0xc001f4caa0) Create stream I0321 11:18:57.206784 6 log.go:172] (0xc001d4a2c0) (0xc001f4caa0) Stream added, broadcasting: 5 I0321 11:18:57.207855 6 log.go:172] (0xc001d4a2c0) Reply frame received for 5 I0321 11:18:57.271043 6 log.go:172] (0xc001d4a2c0) Data frame received for 3 I0321 11:18:57.271129 6 log.go:172] (0xc0012ca140) (3) Data frame handling I0321 11:18:57.271169 6 log.go:172] (0xc0012ca140) (3) Data frame sent I0321 11:18:57.271360 6 log.go:172] 
(0xc001d4a2c0) Data frame received for 3 I0321 11:18:57.271395 6 log.go:172] (0xc0012ca140) (3) Data frame handling I0321 11:18:57.271421 6 log.go:172] (0xc001d4a2c0) Data frame received for 5 I0321 11:18:57.271447 6 log.go:172] (0xc001f4caa0) (5) Data frame handling I0321 11:18:57.272710 6 log.go:172] (0xc001d4a2c0) Data frame received for 1 I0321 11:18:57.272727 6 log.go:172] (0xc001542fa0) (1) Data frame handling I0321 11:18:57.272736 6 log.go:172] (0xc001542fa0) (1) Data frame sent I0321 11:18:57.272750 6 log.go:172] (0xc001d4a2c0) (0xc001542fa0) Stream removed, broadcasting: 1 I0321 11:18:57.272824 6 log.go:172] (0xc001d4a2c0) (0xc001542fa0) Stream removed, broadcasting: 1 I0321 11:18:57.272847 6 log.go:172] (0xc001d4a2c0) Go away received I0321 11:18:57.272882 6 log.go:172] (0xc001d4a2c0) (0xc0012ca140) Stream removed, broadcasting: 3 I0321 11:18:57.272913 6 log.go:172] (0xc001d4a2c0) (0xc001f4caa0) Stream removed, broadcasting: 5 Mar 21 11:18:57.272: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:18:57.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-wswr8" for this suite. 
Mar 21 11:19:19.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:19:19.365: INFO: namespace: e2e-tests-pod-network-test-wswr8, resource: bindings, ignored listing per whitelist Mar 21 11:19:19.392: INFO: namespace e2e-tests-pod-network-test-wswr8 deletion completed in 22.114960519s • [SLOW TEST:44.574 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:19:19.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 21 11:19:19.482: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-dq5mx' Mar 21 11:19:19.576: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 21 11:19:19.576: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 21 11:19:21.602: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-vzkh2] Mar 21 11:19:21.602: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-vzkh2" in namespace "e2e-tests-kubectl-dq5mx" to be "running and ready" Mar 21 11:19:21.604: INFO: Pod "e2e-test-nginx-rc-vzkh2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341243ms Mar 21 11:19:23.608: INFO: Pod "e2e-test-nginx-rc-vzkh2": Phase="Running", Reason="", readiness=true. Elapsed: 2.006488438s Mar 21 11:19:23.608: INFO: Pod "e2e-test-nginx-rc-vzkh2" satisfied condition "running and ready" Mar 21 11:19:23.608: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-vzkh2] Mar 21 11:19:23.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dq5mx' Mar 21 11:19:23.728: INFO: stderr: "" Mar 21 11:19:23.728: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Mar 21 11:19:23.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dq5mx' Mar 21 11:19:23.831: INFO: stderr: "" Mar 21 11:19:23.831: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:19:23.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dq5mx" for this suite. Mar 21 11:19:45.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:19:45.897: INFO: namespace: e2e-tests-kubectl-dq5mx, resource: bindings, ignored listing per whitelist Mar 21 11:19:45.980: INFO: namespace e2e-tests-kubectl-dq5mx deletion completed in 22.145159082s • [SLOW TEST:26.588 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl 
client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:19:45.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Mar 21 11:19:46.083: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix693536534/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:19:46.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m2b9t" for this suite. 
Mar 21 11:19:52.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:19:52.238: INFO: namespace: e2e-tests-kubectl-m2b9t, resource: bindings, ignored listing per whitelist Mar 21 11:19:52.260: INFO: namespace e2e-tests-kubectl-m2b9t deletion completed in 6.097697306s • [SLOW TEST:6.280 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:19:52.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 21 11:19:52.406: INFO: Waiting up to 5m0s for pod "pod-e27c89b2-6b65-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-5wb5z" to be "success or failure" Mar 21 11:19:52.409: INFO: Pod "pod-e27c89b2-6b65-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.749911ms Mar 21 11:19:54.413: INFO: Pod "pod-e27c89b2-6b65-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006928164s Mar 21 11:19:56.416: INFO: Pod "pod-e27c89b2-6b65-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010537595s STEP: Saw pod success Mar 21 11:19:56.416: INFO: Pod "pod-e27c89b2-6b65-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:19:56.419: INFO: Trying to get logs from node hunter-worker2 pod pod-e27c89b2-6b65-11ea-946c-0242ac11000f container test-container: STEP: delete the pod Mar 21 11:19:56.455: INFO: Waiting for pod pod-e27c89b2-6b65-11ea-946c-0242ac11000f to disappear Mar 21 11:19:56.469: INFO: Pod pod-e27c89b2-6b65-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:19:56.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5wb5z" for this suite. 
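(Editor's note.) The "(root,0644,tmpfs)" EmptyDir case above mounts a memory-backed emptyDir volume and verifies a file created with mode 0644 as root. A minimal pod manifest exercising the same behavior might look like the sketch below; the names and image are illustrative, not taken from the log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name
spec:
  containers:
  - name: test-container           # container name as in the log above
    image: busybox                 # any image with a shell works
    command: ["sh", "-c", "mount | grep /data && ls -l /data"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # backs the volume with tmpfs
  restartPolicy: Never
```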
Mar 21 11:20:02.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:20:02.535: INFO: namespace: e2e-tests-emptydir-5wb5z, resource: bindings, ignored listing per whitelist Mar 21 11:20:02.567: INFO: namespace e2e-tests-emptydir-5wb5z deletion completed in 6.094429538s • [SLOW TEST:10.307 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:20:02.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Mar 21 11:20:02.668: INFO: Waiting up to 5m0s for pod "var-expansion-e89ae072-6b65-11ea-946c-0242ac11000f" in namespace "e2e-tests-var-expansion-xr999" to be "success or failure" Mar 21 11:20:02.673: INFO: Pod "var-expansion-e89ae072-6b65-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.763969ms Mar 21 11:20:04.677: INFO: Pod "var-expansion-e89ae072-6b65-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008962737s Mar 21 11:20:06.682: INFO: Pod "var-expansion-e89ae072-6b65-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013478879s STEP: Saw pod success Mar 21 11:20:06.682: INFO: Pod "var-expansion-e89ae072-6b65-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:20:06.685: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-e89ae072-6b65-11ea-946c-0242ac11000f container dapi-container: STEP: delete the pod Mar 21 11:20:06.727: INFO: Waiting for pod var-expansion-e89ae072-6b65-11ea-946c-0242ac11000f to disappear Mar 21 11:20:06.736: INFO: Pod var-expansion-e89ae072-6b65-11ea-946c-0242ac11000f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:20:06.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-xr999" for this suite. 
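(Editor's note.) The Variable Expansion test above verifies `$(VAR)` substitution in a container's `args`. A minimal sketch of that kind of pod spec follows; the pod name, image, and variable are assumptions for illustration (only the container name `dapi-container` appears in the log).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # hypothetical name
spec:
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(MY_VAR)"]       # $(MY_VAR) is expanded by the kubelet, not the shell
    env:
    - name: MY_VAR
      value: "hello from the pod spec"
  restartPolicy: Never
```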
Mar 21 11:20:12.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:20:12.838: INFO: namespace: e2e-tests-var-expansion-xr999, resource: bindings, ignored listing per whitelist Mar 21 11:20:12.885: INFO: namespace e2e-tests-var-expansion-xr999 deletion completed in 6.127962768s • [SLOW TEST:10.318 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:20:12.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:20:17.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-cwds6" for this suite. 
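(Editor's note.) The Kubelet test above schedules a read-only busybox container and checks that it cannot write to its root filesystem. The standard way to get that behavior is the `readOnlyRootFilesystem` security context field; a minimal sketch (all names and the probe command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo         # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /file && echo writable || echo read-only"]
    securityContext:
      readOnlyRootFilesystem: true # writes to the container's root fs should fail
  restartPolicy: Never
```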
Mar 21 11:21:07.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:21:07.072: INFO: namespace: e2e-tests-kubelet-test-cwds6, resource: bindings, ignored listing per whitelist Mar 21 11:21:07.155: INFO: namespace e2e-tests-kubelet-test-cwds6 deletion completed in 50.112537301s • [SLOW TEST:54.270 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:21:07.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-0f1dd901-6b66-11ea-946c-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 21 11:21:07.278: INFO: Waiting up to 5m0s for pod "pod-configmaps-0f1e6f1b-6b66-11ea-946c-0242ac11000f" in namespace "e2e-tests-configmap-frwgp" to be "success or failure" Mar 21 11:21:07.304: 
INFO: Pod "pod-configmaps-0f1e6f1b-6b66-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.816794ms Mar 21 11:21:09.308: INFO: Pod "pod-configmaps-0f1e6f1b-6b66-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02980733s Mar 21 11:21:11.312: INFO: Pod "pod-configmaps-0f1e6f1b-6b66-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03393286s STEP: Saw pod success Mar 21 11:21:11.313: INFO: Pod "pod-configmaps-0f1e6f1b-6b66-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:21:11.316: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-0f1e6f1b-6b66-11ea-946c-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 21 11:21:11.346: INFO: Waiting for pod pod-configmaps-0f1e6f1b-6b66-11ea-946c-0242ac11000f to disappear Mar 21 11:21:11.357: INFO: Pod pod-configmaps-0f1e6f1b-6b66-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:21:11.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-frwgp" for this suite. 
Mar 21 11:21:17.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:21:17.436: INFO: namespace: e2e-tests-configmap-frwgp, resource: bindings, ignored listing per whitelist Mar 21 11:21:17.450: INFO: namespace e2e-tests-configmap-frwgp deletion completed in 6.090240906s • [SLOW TEST:10.295 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:21:17.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 11:21:17.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 21 11:21:17.666: INFO: stderr: "" Mar 21 11:21:17.666: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:25:50Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:21:17.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bqk2c" for this suite. Mar 21 11:21:23.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:21:23.769: INFO: namespace: e2e-tests-kubectl-bqk2c, resource: bindings, ignored listing per whitelist Mar 21 11:21:23.784: INFO: namespace e2e-tests-kubectl-bqk2c deletion completed in 6.112849879s • [SLOW TEST:6.334 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:21:23.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl 
client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 21 11:21:23.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-52rv4' Mar 21 11:21:26.158: INFO: stderr: "" Mar 21 11:21:26.158: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 21 11:21:26.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-52rv4' Mar 21 11:21:26.272: INFO: stderr: "" Mar 21 11:21:26.272: INFO: stdout: "update-demo-nautilus-nkn72 update-demo-nautilus-sj2cn " Mar 21 11:21:26.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkn72 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-52rv4' Mar 21 11:21:26.358: INFO: stderr: "" Mar 21 11:21:26.358: INFO: stdout: "" Mar 21 11:21:26.358: INFO: update-demo-nautilus-nkn72 is created but not running Mar 21 11:21:31.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-52rv4' Mar 21 11:21:31.460: INFO: stderr: "" Mar 21 11:21:31.460: INFO: stdout: "update-demo-nautilus-nkn72 update-demo-nautilus-sj2cn " Mar 21 11:21:31.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkn72 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-52rv4' Mar 21 11:21:31.558: INFO: stderr: "" Mar 21 11:21:31.558: INFO: stdout: "true" Mar 21 11:21:31.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nkn72 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-52rv4' Mar 21 11:21:31.663: INFO: stderr: "" Mar 21 11:21:31.663: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 11:21:31.663: INFO: validating pod update-demo-nautilus-nkn72 Mar 21 11:21:31.668: INFO: got data: { "image": "nautilus.jpg" } Mar 21 11:21:31.668: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 11:21:31.668: INFO: update-demo-nautilus-nkn72 is verified up and running Mar 21 11:21:31.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sj2cn -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-52rv4' Mar 21 11:21:31.766: INFO: stderr: "" Mar 21 11:21:31.766: INFO: stdout: "true" Mar 21 11:21:31.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sj2cn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-52rv4' Mar 21 11:21:31.858: INFO: stderr: "" Mar 21 11:21:31.858: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 11:21:31.858: INFO: validating pod update-demo-nautilus-sj2cn Mar 21 11:21:31.861: INFO: got data: { "image": "nautilus.jpg" } Mar 21 11:21:31.861: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 11:21:31.861: INFO: update-demo-nautilus-sj2cn is verified up and running STEP: using delete to clean up resources Mar 21 11:21:31.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-52rv4' Mar 21 11:21:31.958: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 21 11:21:31.958: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 21 11:21:31.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-52rv4' Mar 21 11:21:32.069: INFO: stderr: "No resources found.\n" Mar 21 11:21:32.069: INFO: stdout: "" Mar 21 11:21:32.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-52rv4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 21 11:21:32.165: INFO: stderr: "" Mar 21 11:21:32.165: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:21:32.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-52rv4" for this suite. 
Mar 21 11:21:54.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:21:54.274: INFO: namespace: e2e-tests-kubectl-52rv4, resource: bindings, ignored listing per whitelist Mar 21 11:21:54.316: INFO: namespace e2e-tests-kubectl-52rv4 deletion completed in 22.147321354s • [SLOW TEST:30.532 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:21:54.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 11:21:54.542: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2b3f40fa-6b66-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0023e9792), BlockOwnerDeletion:(*bool)(0xc0023e9793)}} Mar 21 11:21:54.606: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", 
UID:"2b3e102a-6b66-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0023e993a), BlockOwnerDeletion:(*bool)(0xc0023e993b)}} Mar 21 11:21:54.630: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2b3e94bc-6b66-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001c0eb46), BlockOwnerDeletion:(*bool)(0xc001c0eb47)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:21:59.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-xftb4" for this suite. Mar 21 11:22:05.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:22:05.780: INFO: namespace: e2e-tests-gc-xftb4, resource: bindings, ignored listing per whitelist Mar 21 11:22:05.826: INFO: namespace e2e-tests-gc-xftb4 deletion completed in 6.143024211s • [SLOW TEST:11.509 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:22:05.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 11:22:05.941: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.706417ms) Mar 21 11:22:05.945: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.335494ms) Mar 21 11:22:05.948: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.297279ms) Mar 21 11:22:05.952: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.627727ms) Mar 21 11:22:05.955: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.175655ms) Mar 21 11:22:05.958: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.039947ms) Mar 21 11:22:05.961: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.281357ms) Mar 21 11:22:05.965: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.17837ms) Mar 21 11:22:05.968: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.344298ms) Mar 21 11:22:05.971: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.287663ms) Mar 21 11:22:05.975: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.297444ms) Mar 21 11:22:05.977: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.830812ms) Mar 21 11:22:05.981: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.100441ms) Mar 21 11:22:05.984: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.427083ms) Mar 21 11:22:05.987: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.166205ms) Mar 21 11:22:05.990: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.911888ms) Mar 21 11:22:05.994: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.544455ms) Mar 21 11:22:05.997: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.384718ms) Mar 21 11:22:06.001: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.404174ms) Mar 21 11:22:06.004: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.810681ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:22:06.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-9tcjp" for this suite. Mar 21 11:22:12.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:22:12.080: INFO: namespace: e2e-tests-proxy-9tcjp, resource: bindings, ignored listing per whitelist Mar 21 11:22:12.104: INFO: namespace e2e-tests-proxy-9tcjp deletion completed in 6.096602632s • [SLOW TEST:6.278 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:22:12.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-35cee3c1-6b66-11ea-946c-0242ac11000f STEP: Creating a pod to test consume 
secrets Mar 21 11:22:12.218: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-35d3b5a2-6b66-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-xn2db" to be "success or failure" Mar 21 11:22:12.234: INFO: Pod "pod-projected-secrets-35d3b5a2-6b66-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.893749ms Mar 21 11:22:14.238: INFO: Pod "pod-projected-secrets-35d3b5a2-6b66-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019626752s Mar 21 11:22:16.241: INFO: Pod "pod-projected-secrets-35d3b5a2-6b66-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023414117s STEP: Saw pod success Mar 21 11:22:16.242: INFO: Pod "pod-projected-secrets-35d3b5a2-6b66-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:22:16.244: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-35d3b5a2-6b66-11ea-946c-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 21 11:22:16.270: INFO: Waiting for pod pod-projected-secrets-35d3b5a2-6b66-11ea-946c-0242ac11000f to disappear Mar 21 11:22:16.287: INFO: Pod pod-projected-secrets-35d3b5a2-6b66-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:22:16.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xn2db" for this suite. 
Mar 21 11:22:22.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:22:22.315: INFO: namespace: e2e-tests-projected-xn2db, resource: bindings, ignored listing per whitelist Mar 21 11:22:22.385: INFO: namespace e2e-tests-projected-xn2db deletion completed in 6.094393946s • [SLOW TEST:10.280 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:22:22.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:22:28.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-sp8z4" for this suite. Mar 21 11:22:34.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:22:34.742: INFO: namespace: e2e-tests-namespaces-sp8z4, resource: bindings, ignored listing per whitelist Mar 21 11:22:34.742: INFO: namespace e2e-tests-namespaces-sp8z4 deletion completed in 6.08912971s STEP: Destroying namespace "e2e-tests-nsdeletetest-rxx4r" for this suite. Mar 21 11:22:34.744: INFO: Namespace e2e-tests-nsdeletetest-rxx4r was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-xcgdx" for this suite. Mar 21 11:22:40.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:22:40.834: INFO: namespace: e2e-tests-nsdeletetest-xcgdx, resource: bindings, ignored listing per whitelist Mar 21 11:22:40.856: INFO: namespace e2e-tests-nsdeletetest-xcgdx deletion completed in 6.112194001s • [SLOW TEST:18.471 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:22:40.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-46f9ebd0-6b66-11ea-946c-0242ac11000f STEP: Creating a pod to test consume secrets Mar 21 11:22:41.006: INFO: Waiting up to 5m0s for pod "pod-secrets-46fbfc2e-6b66-11ea-946c-0242ac11000f" in namespace "e2e-tests-secrets-8nmpw" to be "success or failure" Mar 21 11:22:41.025: INFO: Pod "pod-secrets-46fbfc2e-6b66-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.348113ms Mar 21 11:22:43.039: INFO: Pod "pod-secrets-46fbfc2e-6b66-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033623429s Mar 21 11:22:45.043: INFO: Pod "pod-secrets-46fbfc2e-6b66-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037782181s STEP: Saw pod success Mar 21 11:22:45.043: INFO: Pod "pod-secrets-46fbfc2e-6b66-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:22:45.046: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-46fbfc2e-6b66-11ea-946c-0242ac11000f container secret-env-test: STEP: delete the pod Mar 21 11:22:45.068: INFO: Waiting for pod pod-secrets-46fbfc2e-6b66-11ea-946c-0242ac11000f to disappear Mar 21 11:22:45.073: INFO: Pod pod-secrets-46fbfc2e-6b66-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:22:45.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8nmpw" for this suite. Mar 21 11:22:51.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:22:51.129: INFO: namespace: e2e-tests-secrets-8nmpw, resource: bindings, ignored listing per whitelist Mar 21 11:22:51.188: INFO: namespace e2e-tests-secrets-8nmpw deletion completed in 6.111176326s • [SLOW TEST:10.331 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:22:51.189: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-4d1dd4ae-6b66-11ea-946c-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 21 11:22:51.324: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4d2329ba-6b66-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-mktmg" to be "success or failure" Mar 21 11:22:51.350: INFO: Pod "pod-projected-configmaps-4d2329ba-6b66-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.542855ms Mar 21 11:22:53.354: INFO: Pod "pod-projected-configmaps-4d2329ba-6b66-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02907989s Mar 21 11:22:55.357: INFO: Pod "pod-projected-configmaps-4d2329ba-6b66-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032601688s STEP: Saw pod success Mar 21 11:22:55.357: INFO: Pod "pod-projected-configmaps-4d2329ba-6b66-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:22:55.359: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-4d2329ba-6b66-11ea-946c-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 21 11:22:55.392: INFO: Waiting for pod pod-projected-configmaps-4d2329ba-6b66-11ea-946c-0242ac11000f to disappear Mar 21 11:22:55.402: INFO: Pod pod-projected-configmaps-4d2329ba-6b66-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:22:55.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mktmg" for this suite. Mar 21 11:23:01.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:23:01.463: INFO: namespace: e2e-tests-projected-mktmg, resource: bindings, ignored listing per whitelist Mar 21 11:23:01.508: INFO: namespace e2e-tests-projected-mktmg deletion completed in 6.101787375s • [SLOW TEST:10.319 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:23:01.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 21 11:23:01.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-fx4jg' Mar 21 11:23:01.689: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 21 11:23:01.689: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Mar 21 11:23:01.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-fx4jg' Mar 21 11:23:01.804: INFO: stderr: "" Mar 21 11:23:01.804: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:23:01.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fx4jg" for this suite. Mar 21 11:23:13.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:23:13.886: INFO: namespace: e2e-tests-kubectl-fx4jg, resource: bindings, ignored listing per whitelist Mar 21 11:23:13.926: INFO: namespace e2e-tests-kubectl-fx4jg deletion completed in 12.118526816s • [SLOW TEST:12.418 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:23:13.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:23:18.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-zjddg" for this suite. 
Mar 21 11:23:24.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:23:24.112: INFO: namespace: e2e-tests-kubelet-test-zjddg, resource: bindings, ignored listing per whitelist
Mar 21 11:23:24.166: INFO: namespace e2e-tests-kubelet-test-zjddg deletion completed in 6.118883457s
• [SLOW TEST:10.239 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:23:24.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Mar 21 11:23:24.271: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-a,UID:60c53035-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013068,Generation:0,CreationTimestamp:2020-03-21 11:23:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 21 11:23:24.272: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-a,UID:60c53035-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013068,Generation:0,CreationTimestamp:2020-03-21 11:23:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Mar 21 11:23:34.279: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-a,UID:60c53035-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013088,Generation:0,CreationTimestamp:2020-03-21 11:23:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 21 11:23:34.279: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-a,UID:60c53035-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013088,Generation:0,CreationTimestamp:2020-03-21 11:23:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Mar 21 11:23:44.287: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-a,UID:60c53035-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013108,Generation:0,CreationTimestamp:2020-03-21 11:23:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 21 11:23:44.287: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-a,UID:60c53035-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013108,Generation:0,CreationTimestamp:2020-03-21 11:23:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Mar 21 11:23:54.295: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-a,UID:60c53035-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013128,Generation:0,CreationTimestamp:2020-03-21 11:23:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 21 11:23:54.295: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-a,UID:60c53035-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013128,Generation:0,CreationTimestamp:2020-03-21 11:23:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Mar 21 11:24:04.302: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-b,UID:78a2843f-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013149,Generation:0,CreationTimestamp:2020-03-21 11:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 21 11:24:04.303: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-b,UID:78a2843f-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013149,Generation:0,CreationTimestamp:2020-03-21 11:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Mar 21 11:24:14.310: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-b,UID:78a2843f-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013169,Generation:0,CreationTimestamp:2020-03-21 11:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 21 11:24:14.310: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-66fdm,SelfLink:/api/v1/namespaces/e2e-tests-watch-66fdm/configmaps/e2e-watch-test-configmap-b,UID:78a2843f-6b66-11ea-99e8-0242ac110002,ResourceVersion:1013169,Generation:0,CreationTimestamp:2020-03-21 11:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:24:24.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-66fdm" for this suite.
Mar 21 11:24:30.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:24:30.403: INFO: namespace: e2e-tests-watch-66fdm, resource: bindings, ignored listing per whitelist
Mar 21 11:24:30.407: INFO: namespace e2e-tests-watch-66fdm deletion completed in 6.091558888s
• [SLOW TEST:66.241 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:24:30.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Mar 21 11:24:31.232: INFO: Pod name wrapped-volume-race-88a33ece-6b66-11ea-946c-0242ac11000f: Found 0 pods out of 5
Mar 21 11:24:36.254: INFO: Pod name wrapped-volume-race-88a33ece-6b66-11ea-946c-0242ac11000f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-88a33ece-6b66-11ea-946c-0242ac11000f in namespace e2e-tests-emptydir-wrapper-8hl22, will wait for the garbage collector to delete the pods
Mar 21 11:26:08.343: INFO: Deleting ReplicationController wrapped-volume-race-88a33ece-6b66-11ea-946c-0242ac11000f took: 8.604428ms
Mar 21 11:26:08.443: INFO: Terminating ReplicationController wrapped-volume-race-88a33ece-6b66-11ea-946c-0242ac11000f pods took: 100.28375ms
STEP: Creating RC which spawns configmap-volume pods
Mar 21 11:26:51.515: INFO: Pod name wrapped-volume-race-dc43120f-6b66-11ea-946c-0242ac11000f: Found 0 pods out of 5
Mar 21 11:26:56.523: INFO: Pod name wrapped-volume-race-dc43120f-6b66-11ea-946c-0242ac11000f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-dc43120f-6b66-11ea-946c-0242ac11000f in namespace e2e-tests-emptydir-wrapper-8hl22, will wait for the garbage collector to delete the pods
Mar 21 11:28:50.615: INFO: Deleting ReplicationController wrapped-volume-race-dc43120f-6b66-11ea-946c-0242ac11000f took: 7.612738ms
Mar 21 11:28:50.715: INFO: Terminating ReplicationController wrapped-volume-race-dc43120f-6b66-11ea-946c-0242ac11000f pods took: 100.292199ms
STEP: Creating RC which spawns configmap-volume pods
Mar 21 11:29:31.443: INFO: Pod name wrapped-volume-race-3b9cd6a8-6b67-11ea-946c-0242ac11000f: Found 0 pods out of 5
Mar 21 11:29:36.450: INFO: Pod name wrapped-volume-race-3b9cd6a8-6b67-11ea-946c-0242ac11000f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3b9cd6a8-6b67-11ea-946c-0242ac11000f in namespace e2e-tests-emptydir-wrapper-8hl22, will wait for the garbage collector to delete the pods
Mar 21 11:32:20.532: INFO: Deleting ReplicationController wrapped-volume-race-3b9cd6a8-6b67-11ea-946c-0242ac11000f took: 6.985236ms
Mar 21 11:32:20.633: INFO: Terminating ReplicationController wrapped-volume-race-3b9cd6a8-6b67-11ea-946c-0242ac11000f pods took: 100.342013ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:33:02.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-8hl22" for this suite.
Mar 21 11:33:10.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:33:11.001: INFO: namespace: e2e-tests-emptydir-wrapper-8hl22, resource: bindings, ignored listing per whitelist
Mar 21 11:33:11.040: INFO: namespace e2e-tests-emptydir-wrapper-8hl22 deletion completed in 8.088724314s
• [SLOW TEST:520.634 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:33:11.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 21 11:33:11.131: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:33:16.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-c976r" for this suite.
Mar 21 11:33:22.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:33:22.995: INFO: namespace: e2e-tests-init-container-c976r, resource: bindings, ignored listing per whitelist
Mar 21 11:33:23.150: INFO: namespace e2e-tests-init-container-c976r deletion completed in 6.209898111s
• [SLOW TEST:12.109 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:33:23.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 21 11:33:23.274: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 21 11:33:23.281: INFO: Number of nodes with available pods: 0
Mar 21 11:33:23.281: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 21 11:33:23.357: INFO: Number of nodes with available pods: 0
Mar 21 11:33:23.357: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:24.361: INFO: Number of nodes with available pods: 0
Mar 21 11:33:24.361: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:25.360: INFO: Number of nodes with available pods: 0
Mar 21 11:33:25.360: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:26.372: INFO: Number of nodes with available pods: 1
Mar 21 11:33:26.372: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 21 11:33:26.399: INFO: Number of nodes with available pods: 1
Mar 21 11:33:26.399: INFO: Number of running nodes: 0, number of available pods: 1
Mar 21 11:33:27.403: INFO: Number of nodes with available pods: 0
Mar 21 11:33:27.403: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 21 11:33:27.411: INFO: Number of nodes with available pods: 0
Mar 21 11:33:27.411: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:28.415: INFO: Number of nodes with available pods: 0
Mar 21 11:33:28.415: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:29.420: INFO: Number of nodes with available pods: 0
Mar 21 11:33:29.420: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:30.415: INFO: Number of nodes with available pods: 0
Mar 21 11:33:30.415: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:31.416: INFO: Number of nodes with available pods: 0
Mar 21 11:33:31.416: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:32.415: INFO: Number of nodes with available pods: 0
Mar 21 11:33:32.415: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:33.415: INFO: Number of nodes with available pods: 0
Mar 21 11:33:33.415: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:34.415: INFO: Number of nodes with available pods: 0
Mar 21 11:33:34.415: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:35.416: INFO: Number of nodes with available pods: 0
Mar 21 11:33:35.416: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:36.431: INFO: Number of nodes with available pods: 0
Mar 21 11:33:36.431: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:37.415: INFO: Number of nodes with available pods: 0
Mar 21 11:33:37.415: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:38.415: INFO: Number of nodes with available pods: 0
Mar 21 11:33:38.415: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:39.415: INFO: Number of nodes with available pods: 0
Mar 21 11:33:39.416: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:40.415: INFO: Number of nodes with available pods: 0
Mar 21 11:33:40.415: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:41.415: INFO: Number of nodes with available pods: 0
Mar 21 11:33:41.415: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:42.426: INFO: Number of nodes with available pods: 0
Mar 21 11:33:42.426: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 11:33:43.414: INFO: Number of nodes with available pods: 1
Mar 21 11:33:43.414: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-dw8mk, will wait for the garbage collector to delete the pods
Mar 21 11:33:43.481: INFO: Deleting DaemonSet.extensions daemon-set took: 11.94014ms
Mar 21 11:33:43.581: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.279423ms
Mar 21 11:33:47.509: INFO: Number of nodes with available pods: 0
Mar 21 11:33:47.509: INFO: Number of running nodes: 0, number of available pods: 0
Mar 21 11:33:47.515: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dw8mk/daemonsets","resourceVersion":"1014778"},"items":null}
Mar 21 11:33:47.518: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dw8mk/pods","resourceVersion":"1014778"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:33:47.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-dw8mk" for this suite.
Mar 21 11:33:53.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:33:53.628: INFO: namespace: e2e-tests-daemonsets-dw8mk, resource: bindings, ignored listing per whitelist
Mar 21 11:33:53.644: INFO: namespace e2e-tests-daemonsets-dw8mk deletion completed in 6.096065957s
• [SLOW TEST:30.493 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:33:53.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:34:23.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-mchts" for this suite.
Mar 21 11:34:29.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:34:29.277: INFO: namespace: e2e-tests-container-runtime-mchts, resource: bindings, ignored listing per whitelist
Mar 21 11:34:29.308: INFO: namespace e2e-tests-container-runtime-mchts deletion completed in 6.097314939s
• [SLOW TEST:35.664 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:34:29.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Mar 21 11:34:29.911: INFO: Waiting up to 5m0s for pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-bmdgz" in namespace "e2e-tests-svcaccounts-jrf8m" to be "success or failure"
Mar 21 11:34:29.915: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-bmdgz": Phase="Pending", Reason="", readiness=false. Elapsed: 3.922958ms
Mar 21 11:34:31.920: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-bmdgz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008332023s
Mar 21 11:34:34.013: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-bmdgz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101873663s
Mar 21 11:34:36.017: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-bmdgz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105979891s
STEP: Saw pod success
Mar 21 11:34:36.017: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-bmdgz" satisfied condition "success or failure"
Mar 21 11:34:36.020: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-bmdgz container token-test:
STEP: delete the pod
Mar 21 11:34:36.068: INFO: Waiting for pod pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-bmdgz to disappear
Mar 21 11:34:36.083: INFO: Pod pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-bmdgz no longer exists
STEP: Creating a pod to test consume service account root CA
Mar 21 11:34:36.087: INFO: Waiting up to 5m0s for pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-lnslw" in namespace "e2e-tests-svcaccounts-jrf8m" to be "success or failure"
Mar 21 11:34:36.138: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-lnslw": Phase="Pending", Reason="", readiness=false. Elapsed: 51.264024ms
Mar 21 11:34:38.143: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-lnslw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056029877s
Mar 21 11:34:40.152: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-lnslw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065074009s
Mar 21 11:34:42.157: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-lnslw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069633752s
STEP: Saw pod success
Mar 21 11:34:42.157: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-lnslw" satisfied condition "success or failure"
Mar 21 11:34:42.160: INFO: Trying to get logs from node hunter-worker pod pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-lnslw container root-ca-test:
STEP: delete the pod
Mar 21 11:34:42.194: INFO: Waiting for pod pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-lnslw to disappear
Mar 21 11:34:42.199: INFO: Pod pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-lnslw no longer exists
STEP: Creating a pod to test consume service account namespace
Mar 21 11:34:42.205: INFO: Waiting up to 5m0s for pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-g9rqj" in namespace "e2e-tests-svcaccounts-jrf8m" to be "success or failure"
Mar 21 11:34:42.208: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-g9rqj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.981565ms
Mar 21 11:34:44.213: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-g9rqj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007802334s
Mar 21 11:34:46.217: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-g9rqj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011471088s
Mar 21 11:34:48.235: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-g9rqj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029732023s
STEP: Saw pod success
Mar 21 11:34:48.235: INFO: Pod "pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-g9rqj" satisfied condition "success or failure"
Mar 21 11:34:48.237: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-g9rqj container namespace-test:
STEP: delete the pod
Mar 21 11:34:48.252: INFO: Waiting for pod pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-g9rqj to disappear
Mar 21 11:34:48.271: INFO: Pod pod-service-account-ed86b5b3-6b67-11ea-946c-0242ac11000f-g9rqj no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:34:48.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-jrf8m" for this suite.
Mar 21 11:34:54.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:34:54.346: INFO: namespace: e2e-tests-svcaccounts-jrf8m, resource: bindings, ignored listing per whitelist
Mar 21 11:34:54.355: INFO: namespace e2e-tests-svcaccounts-jrf8m deletion completed in 6.081099414s
• [SLOW TEST:25.047 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:34:54.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0321 11:35:34.524132 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 21 11:35:34.524: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:35:34.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ns4cj" for this suite.
Mar 21 11:35:42.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:35:42.638: INFO: namespace: e2e-tests-gc-ns4cj, resource: bindings, ignored listing per whitelist
Mar 21 11:35:42.662: INFO: namespace e2e-tests-gc-ns4cj deletion completed in 8.134976935s
• [SLOW TEST:48.306 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:35:42.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-rdc7
STEP: Creating a pod to test atomic-volume-subpath
Mar 21 11:35:42.979: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rdc7" in namespace "e2e-tests-subpath-zz9mp" to be "success or failure"
Mar 21 11:35:42.995: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.998036ms
Mar 21 11:35:44.998: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01870382s
Mar 21 11:35:47.026: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046676698s
Mar 21 11:35:49.029: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Running", Reason="", readiness=false. Elapsed: 6.050323682s
Mar 21 11:35:51.033: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Running", Reason="", readiness=false. Elapsed: 8.054078303s
Mar 21 11:35:53.037: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Running", Reason="", readiness=false. Elapsed: 10.057574567s
Mar 21 11:35:55.040: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Running", Reason="", readiness=false. Elapsed: 12.060742121s
Mar 21 11:35:57.044: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Running", Reason="", readiness=false. Elapsed: 14.064954301s
Mar 21 11:35:59.074: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Running", Reason="", readiness=false. Elapsed: 16.095429483s
Mar 21 11:36:01.079: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Running", Reason="", readiness=false. Elapsed: 18.099898916s
Mar 21 11:36:03.083: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Running", Reason="", readiness=false. Elapsed: 20.104332508s
Mar 21 11:36:05.087: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Running", Reason="", readiness=false. Elapsed: 22.108332506s
Mar 21 11:36:07.091: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Running", Reason="", readiness=false. Elapsed: 24.112272429s
Mar 21 11:36:09.110: INFO: Pod "pod-subpath-test-projected-rdc7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.131284149s STEP: Saw pod success Mar 21 11:36:09.110: INFO: Pod "pod-subpath-test-projected-rdc7" satisfied condition "success or failure" Mar 21 11:36:09.113: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-rdc7 container test-container-subpath-projected-rdc7: STEP: delete the pod Mar 21 11:36:09.160: INFO: Waiting for pod pod-subpath-test-projected-rdc7 to disappear Mar 21 11:36:09.175: INFO: Pod pod-subpath-test-projected-rdc7 no longer exists STEP: Deleting pod pod-subpath-test-projected-rdc7 Mar 21 11:36:09.175: INFO: Deleting pod "pod-subpath-test-projected-rdc7" in namespace "e2e-tests-subpath-zz9mp" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:36:09.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-zz9mp" for this suite. Mar 21 11:36:15.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:36:15.233: INFO: namespace: e2e-tests-subpath-zz9mp, resource: bindings, ignored listing per whitelist Mar 21 11:36:15.279: INFO: namespace e2e-tests-subpath-zz9mp deletion completed in 6.089580878s • [SLOW TEST:32.618 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
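The Downward API test that follows creates a pod whose environment variables are populated from its own resource limits and requests. A minimal sketch of such a pod (illustrative manifest; the pod name, image, and resource values are assumptions, only the container name `dapi-container` comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container         # container name as seen in the test log
    image: busybox               # hypothetical image
    command: ["sh", "-c", "env"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}   # assumed values
      limits:   {cpu: 500m, memory: 64Mi}   # assumed values
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:        # Downward API: expose resources as env vars
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
```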
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:36:15.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 21 11:36:15.424: INFO: Waiting up to 5m0s for pod "downward-api-2c6a9145-6b68-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-7hmdx" to be "success or failure"
Mar 21 11:36:15.432: INFO: Pod "downward-api-2c6a9145-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.749533ms
Mar 21 11:36:17.436: INFO: Pod "downward-api-2c6a9145-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012426103s
Mar 21 11:36:19.440: INFO: Pod "downward-api-2c6a9145-6b68-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016730996s
STEP: Saw pod success
Mar 21 11:36:19.440: INFO: Pod "downward-api-2c6a9145-6b68-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:36:19.444: INFO: Trying to get logs from node hunter-worker2 pod downward-api-2c6a9145-6b68-11ea-946c-0242ac11000f container dapi-container:
STEP: delete the pod
Mar 21 11:36:19.481: INFO: Waiting for pod downward-api-2c6a9145-6b68-11ea-946c-0242ac11000f to disappear
Mar 21 11:36:19.499: INFO: Pod downward-api-2c6a9145-6b68-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:36:19.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7hmdx" for this suite.
Mar 21 11:36:25.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:36:25.701: INFO: namespace: e2e-tests-downward-api-7hmdx, resource: bindings, ignored listing per whitelist
Mar 21 11:36:25.711: INFO: namespace e2e-tests-downward-api-7hmdx deletion completed in 6.207967331s
• [SLOW TEST:10.431 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:36:25.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Mar 21 11:36:25.831: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:36:25.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t5wb6" for this suite.
Mar 21 11:36:31.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:36:31.960: INFO: namespace: e2e-tests-kubectl-t5wb6, resource: bindings, ignored listing per whitelist
Mar 21 11:36:32.012: INFO: namespace e2e-tests-kubectl-t5wb6 deletion completed in 6.099437188s
• [SLOW TEST:6.301 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:36:32.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Mar 21 11:36:32.152: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6xlrg,SelfLink:/api/v1/namespaces/e2e-tests-watch-6xlrg/configmaps/e2e-watch-test-label-changed,UID:365bed1d-6b68-11ea-99e8-0242ac110002,ResourceVersion:1015549,Generation:0,CreationTimestamp:2020-03-21 11:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 21 11:36:32.152: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6xlrg,SelfLink:/api/v1/namespaces/e2e-tests-watch-6xlrg/configmaps/e2e-watch-test-label-changed,UID:365bed1d-6b68-11ea-99e8-0242ac110002,ResourceVersion:1015550,Generation:0,CreationTimestamp:2020-03-21 11:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 21 11:36:32.152: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6xlrg,SelfLink:/api/v1/namespaces/e2e-tests-watch-6xlrg/configmaps/e2e-watch-test-label-changed,UID:365bed1d-6b68-11ea-99e8-0242ac110002,ResourceVersion:1015551,Generation:0,CreationTimestamp:2020-03-21 11:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Mar 21 11:36:42.193: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6xlrg,SelfLink:/api/v1/namespaces/e2e-tests-watch-6xlrg/configmaps/e2e-watch-test-label-changed,UID:365bed1d-6b68-11ea-99e8-0242ac110002,ResourceVersion:1015572,Generation:0,CreationTimestamp:2020-03-21 11:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 21 11:36:42.193: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6xlrg,SelfLink:/api/v1/namespaces/e2e-tests-watch-6xlrg/configmaps/e2e-watch-test-label-changed,UID:365bed1d-6b68-11ea-99e8-0242ac110002,ResourceVersion:1015573,Generation:0,CreationTimestamp:2020-03-21 11:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Mar 21 11:36:42.193: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6xlrg,SelfLink:/api/v1/namespaces/e2e-tests-watch-6xlrg/configmaps/e2e-watch-test-label-changed,UID:365bed1d-6b68-11ea-99e8-0242ac110002,ResourceVersion:1015574,Generation:0,CreationTimestamp:2020-03-21 11:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:36:42.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-6xlrg" for this suite.
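Rendered as a manifest, the ConfigMap object dumped in the watch events above looks like the following (reconstructed from the logged fields; only the watched label and final data key are shown, system-managed metadata omitted):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: e2e-tests-watch-6xlrg
  labels:
    watch-this-configmap: label-changed-and-restored   # the label the watch selects on
data:
  mutation: "3"   # incremented once per "modifying the configmap" step
```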
Mar 21 11:36:48.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:36:48.215: INFO: namespace: e2e-tests-watch-6xlrg, resource: bindings, ignored listing per whitelist
Mar 21 11:36:48.289: INFO: namespace e2e-tests-watch-6xlrg deletion completed in 6.090661016s
• [SLOW TEST:16.276 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:36:48.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 21 11:36:48.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4011708d-6b68-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-pgk5s" to be "success or failure"
Mar 21 11:36:48.422: INFO: Pod "downwardapi-volume-4011708d-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.422286ms
Mar 21 11:36:50.426: INFO: Pod "downwardapi-volume-4011708d-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026778639s
Mar 21 11:36:52.431: INFO: Pod "downwardapi-volume-4011708d-6b68-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031128889s
STEP: Saw pod success
Mar 21 11:36:52.431: INFO: Pod "downwardapi-volume-4011708d-6b68-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:36:52.434: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4011708d-6b68-11ea-946c-0242ac11000f container client-container:
STEP: delete the pod
Mar 21 11:36:52.476: INFO: Waiting for pod downwardapi-volume-4011708d-6b68-11ea-946c-0242ac11000f to disappear
Mar 21 11:36:52.499: INFO: Pod downwardapi-volume-4011708d-6b68-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:36:52.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pgk5s" for this suite.
Mar 21 11:36:58.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:36:58.537: INFO: namespace: e2e-tests-downward-api-pgk5s, resource: bindings, ignored listing per whitelist
Mar 21 11:36:58.597: INFO: namespace e2e-tests-downward-api-pgk5s deletion completed in 6.094631475s
• [SLOW TEST:10.308 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:36:58.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Mar 21 11:36:58.720: INFO: namespace e2e-tests-kubectl-8cvpg
Mar 21 11:36:58.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8cvpg'
Mar 21 11:37:01.158: INFO: stderr: ""
Mar 21 11:37:01.158: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
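The "creating Redis RC" step above pipes a manifest into `kubectl create -f -`; the manifest itself is not in the log. A minimal ReplicationController along these lines would produce the `replicationcontroller/redis-master created` output (the `app: redis` selector matches the log's "Selector matched 1 pods for map[app:redis]"; image and port are assumptions based on the Redis 3.2.12 startup banner later in the log):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis:3.2.12      # version taken from the container's startup banner
        ports:
        - containerPort: 6379    # port reported in the Redis startup log
```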
Mar 21 11:37:02.162: INFO: Selector matched 1 pods for map[app:redis]
Mar 21 11:37:02.162: INFO: Found 0 / 1
Mar 21 11:37:03.163: INFO: Selector matched 1 pods for map[app:redis]
Mar 21 11:37:03.163: INFO: Found 0 / 1
Mar 21 11:37:04.234: INFO: Selector matched 1 pods for map[app:redis]
Mar 21 11:37:04.234: INFO: Found 1 / 1
Mar 21 11:37:04.234: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Mar 21 11:37:04.238: INFO: Selector matched 1 pods for map[app:redis]
Mar 21 11:37:04.238: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 21 11:37:04.238: INFO: wait on redis-master startup in e2e-tests-kubectl-8cvpg
Mar 21 11:37:04.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5rxgb redis-master --namespace=e2e-tests-kubectl-8cvpg'
Mar 21 11:37:04.348: INFO: stderr: ""
Mar 21 11:37:04.348: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 21 Mar 11:37:03.985 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Mar 11:37:03.985 # Server started, Redis version 3.2.12\n1:M 21 Mar 11:37:03.986 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Mar 11:37:03.986 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Mar 21 11:37:04.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-8cvpg'
Mar 21 11:37:04.496: INFO: stderr: ""
Mar 21 11:37:04.496: INFO: stdout: "service/rm2 exposed\n"
Mar 21 11:37:04.498: INFO: Service rm2 in namespace e2e-tests-kubectl-8cvpg found.
STEP: exposing service
Mar 21 11:37:06.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-8cvpg'
Mar 21 11:37:06.664: INFO: stderr: ""
Mar 21 11:37:06.664: INFO: stdout: "service/rm3 exposed\n"
Mar 21 11:37:06.674: INFO: Service rm3 in namespace e2e-tests-kubectl-8cvpg found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:37:08.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8cvpg" for this suite.
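The `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` run above creates a Service roughly equivalent to the following sketch (reconstructed from the command's flags; `kubectl expose` inherits the selector from the exposed rc, assumed here to be `app: redis`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: e2e-tests-kubectl-8cvpg
spec:
  selector:
    app: redis        # inherited from the redis-master rc
  ports:
  - port: 1234        # --port
    targetPort: 6379  # --target-port
```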
Mar 21 11:37:30.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:37:30.750: INFO: namespace: e2e-tests-kubectl-8cvpg, resource: bindings, ignored listing per whitelist
Mar 21 11:37:30.795: INFO: namespace e2e-tests-kubectl-8cvpg deletion completed in 22.109347197s
• [SLOW TEST:32.197 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:37:30.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-596b9615-6b68-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 21 11:37:30.930: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-596c19cf-6b68-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-jnb8j" to be "success or failure"
Mar 21 11:37:30.932: INFO: Pod "pod-projected-secrets-596c19cf-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.394684ms
Mar 21 11:37:32.936: INFO: Pod "pod-projected-secrets-596c19cf-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006448034s
Mar 21 11:37:34.941: INFO: Pod "pod-projected-secrets-596c19cf-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011216018s
Mar 21 11:37:36.945: INFO: Pod "pod-projected-secrets-596c19cf-6b68-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015388953s
STEP: Saw pod success
Mar 21 11:37:36.945: INFO: Pod "pod-projected-secrets-596c19cf-6b68-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:37:36.949: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-596c19cf-6b68-11ea-946c-0242ac11000f container projected-secret-volume-test:
STEP: delete the pod
Mar 21 11:37:36.969: INFO: Waiting for pod pod-projected-secrets-596c19cf-6b68-11ea-946c-0242ac11000f to disappear
Mar 21 11:37:36.974: INFO: Pod pod-projected-secrets-596c19cf-6b68-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:37:36.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jnb8j" for this suite.
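The projected-secret test above mounts a Secret through a `projected` volume with an explicit item mapping and mode. A sketch of that pattern (pod name, image, key, path, and mode are assumptions; only the secret name and container name come from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test   # container name as seen in the log
    image: busybox                       # hypothetical image
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-596b9615-6b68-11ea-946c-0242ac11000f
          items:
          - key: data-1          # assumed key
            path: new-path-data-1
            mode: 0400           # the "Item Mode" the test verifies
```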
Mar 21 11:37:43.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:37:43.043: INFO: namespace: e2e-tests-projected-jnb8j, resource: bindings, ignored listing per whitelist
Mar 21 11:37:43.090: INFO: namespace e2e-tests-projected-jnb8j deletion completed in 6.113765301s
• [SLOW TEST:12.295 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:37:43.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:37:43.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7snf4" for this suite.
Mar 21 11:38:05.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:38:05.313: INFO: namespace: e2e-tests-pods-7snf4, resource: bindings, ignored listing per whitelist
Mar 21 11:38:05.377: INFO: namespace e2e-tests-pods-7snf4 deletion completed in 22.156178204s
• [SLOW TEST:22.287 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:38:05.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Mar 21 11:38:05.515: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 21 11:38:05.530: INFO: Waiting for terminating namespaces to be deleted...
Mar 21 11:38:05.548: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 21 11:38:05.553: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Mar 21 11:38:05.553: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 11:38:05.553: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 21 11:38:05.553: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 11:38:05.553: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 21 11:38:05.553: INFO: Container coredns ready: true, restart count 0 Mar 21 11:38:05.553: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 21 11:38:05.558: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 21 11:38:05.559: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 11:38:05.559: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 21 11:38:05.559: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 11:38:05.559: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 21 11:38:05.559: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-70775c7b-6b68-11ea-946c-0242ac11000f 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-70775c7b-6b68-11ea-946c-0242ac11000f off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-70775c7b-6b68-11ea-946c-0242ac11000f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:38:13.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-knkmj" for this suite. Mar 21 11:38:41.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:38:41.720: INFO: namespace: e2e-tests-sched-pred-knkmj, resource: bindings, ignored listing per whitelist Mar 21 11:38:41.789: INFO: namespace e2e-tests-sched-pred-knkmj deletion completed in 28.095076641s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:36.412 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:38:41.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should 
support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 21 11:38:41.881: INFO: Waiting up to 5m0s for pod "pod-83b45738-6b68-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-v2cmt" to be "success or failure" Mar 21 11:38:41.891: INFO: Pod "pod-83b45738-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.356324ms Mar 21 11:38:43.895: INFO: Pod "pod-83b45738-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014073295s Mar 21 11:38:45.899: INFO: Pod "pod-83b45738-6b68-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018474844s STEP: Saw pod success Mar 21 11:38:45.899: INFO: Pod "pod-83b45738-6b68-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:38:45.903: INFO: Trying to get logs from node hunter-worker2 pod pod-83b45738-6b68-11ea-946c-0242ac11000f container test-container: STEP: delete the pod Mar 21 11:38:45.934: INFO: Waiting for pod pod-83b45738-6b68-11ea-946c-0242ac11000f to disappear Mar 21 11:38:45.956: INFO: Pod pod-83b45738-6b68-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:38:45.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-v2cmt" for this suite. 
Mar 21 11:38:51.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:38:52.043: INFO: namespace: e2e-tests-emptydir-v2cmt, resource: bindings, ignored listing per whitelist Mar 21 11:38:52.050: INFO: namespace e2e-tests-emptydir-v2cmt deletion completed in 6.089610784s • [SLOW TEST:10.260 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:38:52.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-ps55b/secret-test-89da3cb0-6b68-11ea-946c-0242ac11000f STEP: Creating a pod to test consume secrets Mar 21 11:38:52.212: INFO: Waiting up to 5m0s for pod "pod-configmaps-89dc2433-6b68-11ea-946c-0242ac11000f" in namespace "e2e-tests-secrets-ps55b" to be "success or failure" Mar 21 11:38:52.219: INFO: Pod "pod-configmaps-89dc2433-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.419389ms Mar 21 11:38:54.222: INFO: Pod "pod-configmaps-89dc2433-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009681123s Mar 21 11:38:56.250: INFO: Pod "pod-configmaps-89dc2433-6b68-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037763585s STEP: Saw pod success Mar 21 11:38:56.250: INFO: Pod "pod-configmaps-89dc2433-6b68-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:38:56.253: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-89dc2433-6b68-11ea-946c-0242ac11000f container env-test: STEP: delete the pod Mar 21 11:38:56.302: INFO: Waiting for pod pod-configmaps-89dc2433-6b68-11ea-946c-0242ac11000f to disappear Mar 21 11:38:56.321: INFO: Pod pod-configmaps-89dc2433-6b68-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:38:56.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ps55b" for this suite. 
Mar 21 11:39:02.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:39:02.420: INFO: namespace: e2e-tests-secrets-ps55b, resource: bindings, ignored listing per whitelist Mar 21 11:39:02.480: INFO: namespace e2e-tests-secrets-ps55b deletion completed in 6.150350282s • [SLOW TEST:10.430 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:39:02.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-c9np7/configmap-test-90095342-6b68-11ea-946c-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 21 11:39:02.580: INFO: Waiting up to 5m0s for pod "pod-configmaps-900b6123-6b68-11ea-946c-0242ac11000f" in namespace "e2e-tests-configmap-c9np7" to be "success or failure" Mar 21 11:39:02.585: INFO: Pod "pod-configmaps-900b6123-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.099824ms Mar 21 11:39:04.588: INFO: Pod "pod-configmaps-900b6123-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00747745s Mar 21 11:39:06.592: INFO: Pod "pod-configmaps-900b6123-6b68-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011476142s STEP: Saw pod success Mar 21 11:39:06.592: INFO: Pod "pod-configmaps-900b6123-6b68-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:39:06.595: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-900b6123-6b68-11ea-946c-0242ac11000f container env-test: STEP: delete the pod Mar 21 11:39:06.621: INFO: Waiting for pod pod-configmaps-900b6123-6b68-11ea-946c-0242ac11000f to disappear Mar 21 11:39:06.645: INFO: Pod pod-configmaps-900b6123-6b68-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:39:06.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-c9np7" for this suite. 
Mar 21 11:39:12.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:39:12.735: INFO: namespace: e2e-tests-configmap-c9np7, resource: bindings, ignored listing per whitelist Mar 21 11:39:12.744: INFO: namespace e2e-tests-configmap-c9np7 deletion completed in 6.095158853s • [SLOW TEST:10.264 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:39:12.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-x8252 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-x8252 to expose endpoints map[] Mar 21 11:39:12.927: INFO: Get endpoints failed (11.090122ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 21 11:39:13.931: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-x8252 
exposes endpoints map[] (1.015214916s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-x8252 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-x8252 to expose endpoints map[pod1:[80]] Mar 21 11:39:16.972: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-x8252 exposes endpoints map[pod1:[80]] (3.033201486s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-x8252 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-x8252 to expose endpoints map[pod1:[80] pod2:[80]] Mar 21 11:39:20.032: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-x8252 exposes endpoints map[pod1:[80] pod2:[80]] (3.057159157s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-x8252 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-x8252 to expose endpoints map[pod2:[80]] Mar 21 11:39:21.055: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-x8252 exposes endpoints map[pod2:[80]] (1.017711818s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-x8252 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-x8252 to expose endpoints map[] Mar 21 11:39:22.070: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-x8252 exposes endpoints map[] (1.010035516s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:39:22.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-x8252" for this suite. 
Mar 21 11:39:44.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:39:44.199: INFO: namespace: e2e-tests-services-x8252, resource: bindings, ignored listing per whitelist Mar 21 11:39:44.206: INFO: namespace e2e-tests-services-x8252 deletion completed in 22.097338673s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:31.461 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:39:44.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 11:39:44.291: INFO: Creating deployment "nginx-deployment" Mar 21 11:39:44.335: INFO: Waiting for observed generation 1 Mar 21 11:39:46.363: INFO: Waiting for all required pods to come up Mar 21 11:39:46.367: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 21 11:39:54.383: INFO: 
Waiting for deployment "nginx-deployment" to complete Mar 21 11:39:54.390: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 21 11:39:54.396: INFO: Updating deployment nginx-deployment Mar 21 11:39:54.396: INFO: Waiting for observed generation 2 Mar 21 11:39:56.405: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 21 11:39:56.407: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 21 11:39:56.426: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 21 11:39:56.461: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 21 11:39:56.461: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 21 11:39:56.463: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 21 11:39:56.467: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 21 11:39:56.467: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 21 11:39:56.471: INFO: Updating deployment nginx-deployment Mar 21 11:39:56.471: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Mar 21 11:39:56.480: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 21 11:39:56.499: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 21 11:39:56.643: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xq2s8/deployments/nginx-deployment,UID:a8ea6a09-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016429,Generation:3,CreationTimestamp:2020-03-21 11:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-03-21 11:39:54 +0000 UTC 2020-03-21 11:39:44 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-03-21 11:39:56 +0000 UTC 2020-03-21 11:39:56 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 21 11:39:56.803: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xq2s8/replicasets/nginx-deployment-5c98f8fb5,UID:aef14da6-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016465,Generation:3,CreationTimestamp:2020-03-21 11:39:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a8ea6a09-6b68-11ea-99e8-0242ac110002 0xc0022237a7 0xc0022237a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 21 11:39:56.803: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 21 11:39:56.803: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xq2s8/replicasets/nginx-deployment-85ddf47c5d,UID:a8f4d388-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016454,Generation:3,CreationTimestamp:2020-03-21 11:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a8ea6a09-6b68-11ea-99e8-0242ac110002 0xc002223867 0xc002223868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 21 11:39:56.848: INFO: Pod "nginx-deployment-5c98f8fb5-4tmmn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4tmmn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-4tmmn,UID:b03138d2-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016443,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001fc26f7 0xc001fc26f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fc2780} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001fc27a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.848: INFO: Pod "nginx-deployment-5c98f8fb5-79k5w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-79k5w,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-79k5w,UID:af1377b8-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016404,Generation:0,CreationTimestamp:2020-03-21 11:39:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001fc2817 0xc001fc2818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fc28e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fc2900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-21 11:39:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.849: INFO: Pod "nginx-deployment-5c98f8fb5-7wsm8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7wsm8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-7wsm8,UID:b034621d-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016463,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001fc2a00 0xc001fc2a01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fc2ac0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001fc2af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.849: INFO: Pod "nginx-deployment-5c98f8fb5-csppw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-csppw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-csppw,UID:b0344969-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016455,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001fc2ba7 0xc001fc2ba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fc2ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fc2cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.849: INFO: Pod "nginx-deployment-5c98f8fb5-gdpnd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gdpnd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-gdpnd,UID:b02e6f83-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016432,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001fc2d97 0xc001fc2d98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fc2f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fc2f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.849: INFO: Pod "nginx-deployment-5c98f8fb5-htggx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-htggx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-htggx,UID:b03456b7-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016460,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001fc34a7 0xc001fc34a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fc3520} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001fc3540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.849: INFO: Pod "nginx-deployment-5c98f8fb5-kz9b2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kz9b2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-kz9b2,UID:b0344e1a-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016456,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001fc35b7 0xc001fc35b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fc3630} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fc3650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.850: INFO: Pod "nginx-deployment-5c98f8fb5-lj58b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lj58b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-lj58b,UID:aef88e02-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016386,Generation:0,CreationTimestamp:2020-03-21 11:39:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001fc36c7 0xc001fc36c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fc3860} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fc3920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-21 11:39:54 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.850: INFO: Pod "nginx-deployment-5c98f8fb5-lrnkk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lrnkk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-lrnkk,UID:b0312144-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016436,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001fc3a60 0xc001fc3a61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fc3bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fc3d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.850: INFO: Pod "nginx-deployment-5c98f8fb5-mf8bp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mf8bp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-mf8bp,UID:af16b695-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016406,Generation:0,CreationTimestamp:2020-03-21 11:39:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001fc3dd7 0xc001fc3dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fc3e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fc3e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-21 11:39:55 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.850: INFO: Pod "nginx-deployment-5c98f8fb5-n8znq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n8znq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-n8znq,UID:aef662f5-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016385,Generation:0,CreationTimestamp:2020-03-21 11:39:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001f28430 0xc001f28431}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f284b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f284d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-21 11:39:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.850: INFO: Pod "nginx-deployment-5c98f8fb5-wrlln" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wrlln,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-wrlln,UID:aef89961-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016393,Generation:0,CreationTimestamp:2020-03-21 11:39:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001f28fd0 0xc001f28fd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f29050} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001f29070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-21 11:39:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.851: INFO: Pod "nginx-deployment-5c98f8fb5-xrktg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xrktg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-5c98f8fb5-xrktg,UID:b0371d3f-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016466,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 aef14da6-6b68-11ea-99e8-0242ac110002 0xc001f29300 0xc001f29301}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f294d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f294f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.851: INFO: Pod "nginx-deployment-85ddf47c5d-2k255" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2k255,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-2k255,UID:b02d5fdf-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016467,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f296a7 0xc001f296a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f29720} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f29780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-03-21 11:39:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-21 11:39:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.851: INFO: Pod "nginx-deployment-85ddf47c5d-5998b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5998b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-5998b,UID:b0314b8e-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016441,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f299b7 0xc001f299b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f29b00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f29b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.851: INFO: Pod "nginx-deployment-85ddf47c5d-679kw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-679kw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-679kw,UID:b02e44ef-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016476,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f29c07 0xc001f29c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f29d70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f29d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-21 11:39:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.851: INFO: Pod "nginx-deployment-85ddf47c5d-6r7p5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6r7p5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-6r7p5,UID:b0313381-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016440,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f29ed7 0xc001f29ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f29fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f29fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.852: INFO: Pod "nginx-deployment-85ddf47c5d-7jz2t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7jz2t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-7jz2t,UID:b02e4649-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016431,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f20a67 0xc001f20a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f20ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f20b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.852: INFO: Pod "nginx-deployment-85ddf47c5d-8p5sd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8p5sd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-8p5sd,UID:b03148c1-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016445,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f20fd7 0xc001f20fd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001f21050} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f21070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.852: INFO: Pod "nginx-deployment-85ddf47c5d-b8xqc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b8xqc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-b8xqc,UID:b034a217-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016461,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f210e7 0xc001f210e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f21480} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f214a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.852: INFO: Pod "nginx-deployment-85ddf47c5d-ccxmm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ccxmm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-ccxmm,UID:a8fd6365-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016317,Generation:0,CreationTimestamp:2020-03-21 11:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f21657 0xc001f21658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f21810} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f21870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.107,StartTime:2020-03-21 11:39:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-21 11:39:50 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a699d6b59f7e707138b84c33f2690dc327f746568b3c5a2f7c0bd797c8401b1f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.852: INFO: Pod "nginx-deployment-85ddf47c5d-dvfkc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dvfkc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-dvfkc,UID:a8fed4ce-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016331,Generation:0,CreationTimestamp:2020-03-21 11:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f21937 0xc001f21938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f21a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f21a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.226,StartTime:2020-03-21 11:39:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-21 11:39:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c4101d8fad674fe7af90b4b7df8e45c27b8d731248f26fcff87a1da3389688aa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.853: INFO: Pod "nginx-deployment-85ddf47c5d-fbwlg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fbwlg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-fbwlg,UID:b034a9c9-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016459,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f21b07 0xc001f21b08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001f21b80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f21ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.853: INFO: Pod "nginx-deployment-85ddf47c5d-gdgxn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gdgxn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-gdgxn,UID:a8feda63-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016339,Generation:0,CreationTimestamp:2020-03-21 11:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f21ca7 0xc001f21ca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f21d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f21d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.109,StartTime:2020-03-21 11:39:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-21 11:39:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://41386485d3cd0a3d328b2e422d6c73f078e1d708e487821c26ba35c528b567d6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.853: INFO: Pod "nginx-deployment-85ddf47c5d-gkthz" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gkthz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-gkthz,UID:a8fd53da-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016312,Generation:0,CreationTimestamp:2020-03-21 11:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f21e07 0xc001f21e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001f21e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f21ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.225,StartTime:2020-03-21 11:39:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-21 11:39:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://975e1ec47f1c004812731bc261122212f5676b485cbf063f640f84f356fdf55c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.853: INFO: Pod "nginx-deployment-85ddf47c5d-jqm4x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jqm4x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-jqm4x,UID:a90397c4-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016347,Generation:0,CreationTimestamp:2020-03-21 11:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001f21f67 0xc001f21f68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f21fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c5cbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.227,StartTime:2020-03-21 11:39:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-21 11:39:53 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a07b43dabde48272edd98725fb0b6889f77dc3d0d4f49b215d5741180a1c5f7e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.853: INFO: Pod "nginx-deployment-85ddf47c5d-n6p8p" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n6p8p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-n6p8p,UID:a8fedd15-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016321,Generation:0,CreationTimestamp:2020-03-21 11:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001c5cc77 0xc001c5cc78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c5cda0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c5d100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.108,StartTime:2020-03-21 11:39:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-21 11:39:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://75179d9d28e30eea4177aaeff908e2bbdd499a6a8a97ab961419d7edfca01d57}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.854: INFO: Pod "nginx-deployment-85ddf47c5d-nd2kw" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nd2kw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-nd2kw,UID:a903a82d-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016341,Generation:0,CreationTimestamp:2020-03-21 11:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001c5d307 0xc001c5d308}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001c5d380} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c5d450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.110,StartTime:2020-03-21 11:39:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-21 11:39:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f861af980f25b4368837dbb6e13505eea17d5c38280a84688a285d676671d567}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.854: INFO: Pod "nginx-deployment-85ddf47c5d-nr8g6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nr8g6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-nr8g6,UID:b034aedf-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016458,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001c5d597 0xc001c5d598}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c5d670} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c5d6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.854: INFO: Pod "nginx-deployment-85ddf47c5d-ntdwt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ntdwt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-ntdwt,UID:b031399c-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016471,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001c5d747 0xc001c5d748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001c5d830} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c5d850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-21 11:39:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.854: INFO: Pod "nginx-deployment-85ddf47c5d-pm687" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pm687,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-pm687,UID:b034a5a4-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016457,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001c5d937 0xc001c5d938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c5d9d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c5d9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.854: INFO: Pod "nginx-deployment-85ddf47c5d-pmpxs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pmpxs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-pmpxs,UID:b0349101-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016462,Generation:0,CreationTimestamp:2020-03-21 11:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001c5db17 0xc001c5db18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001c5db90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c5dbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 21 11:39:56.854: INFO: Pod "nginx-deployment-85ddf47c5d-zkrgs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zkrgs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xq2s8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xq2s8/pods/nginx-deployment-85ddf47c5d-zkrgs,UID:a8fcb242-6b68-11ea-99e8-0242ac110002,ResourceVersion:1016297,Generation:0,CreationTimestamp:2020-03-21 11:39:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a8f4d388-6b68-11ea-99e8-0242ac110002 0xc001c5dda7 0xc001c5dda8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qqnfd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqnfd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqnfd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001adc000} {node.kubernetes.io/unreachable Exists NoExecute 0xc001adc020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:39:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.106,StartTime:2020-03-21 11:39:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-21 11:39:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d2d73a516edafb879104c198e12cf0bcc5a0e000610f0437b2da4f0bb79606df}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:39:56.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-xq2s8" for this suite. 
Mar 21 11:40:13.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:40:13.146: INFO: namespace: e2e-tests-deployment-xq2s8, resource: bindings, ignored listing per whitelist Mar 21 11:40:13.262: INFO: namespace e2e-tests-deployment-xq2s8 deletion completed in 16.362749102s • [SLOW TEST:29.056 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:40:13.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-ba629045-6b68-11ea-946c-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 21 11:40:13.670: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba6bc718-6b68-11ea-946c-0242ac11000f" in namespace "e2e-tests-configmap-8ft5b" to be "success or failure" Mar 21 11:40:13.719: INFO: Pod "pod-configmaps-ba6bc718-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.087943ms Mar 21 11:40:15.722: INFO: Pod "pod-configmaps-ba6bc718-6b68-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052567751s Mar 21 11:40:17.726: INFO: Pod "pod-configmaps-ba6bc718-6b68-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056435092s STEP: Saw pod success Mar 21 11:40:17.726: INFO: Pod "pod-configmaps-ba6bc718-6b68-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:40:17.729: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-ba6bc718-6b68-11ea-946c-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 21 11:40:17.798: INFO: Waiting for pod pod-configmaps-ba6bc718-6b68-11ea-946c-0242ac11000f to disappear Mar 21 11:40:17.803: INFO: Pod pod-configmaps-ba6bc718-6b68-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:40:17.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8ft5b" for this suite. 
Mar 21 11:40:23.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:40:23.876: INFO: namespace: e2e-tests-configmap-8ft5b, resource: bindings, ignored listing per whitelist Mar 21 11:40:23.919: INFO: namespace e2e-tests-configmap-8ft5b deletion completed in 6.112174782s • [SLOW TEST:10.656 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:40:23.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-s5nl2 Mar 21 11:40:28.075: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-s5nl2 STEP: checking the pod's current state and 
verifying that restartCount is present Mar 21 11:40:28.079: INFO: Initial restart count of pod liveness-http is 0 Mar 21 11:40:44.114: INFO: Restart count of pod e2e-tests-container-probe-s5nl2/liveness-http is now 1 (16.035466928s elapsed) Mar 21 11:41:04.154: INFO: Restart count of pod e2e-tests-container-probe-s5nl2/liveness-http is now 2 (36.075044644s elapsed) Mar 21 11:41:22.190: INFO: Restart count of pod e2e-tests-container-probe-s5nl2/liveness-http is now 3 (54.110968457s elapsed) Mar 21 11:41:42.250: INFO: Restart count of pod e2e-tests-container-probe-s5nl2/liveness-http is now 4 (1m14.171851482s elapsed) Mar 21 11:42:46.385: INFO: Restart count of pod e2e-tests-container-probe-s5nl2/liveness-http is now 5 (2m18.306873542s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:42:46.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-s5nl2" for this suite. 
Mar 21 11:42:52.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:42:52.513: INFO: namespace: e2e-tests-container-probe-s5nl2, resource: bindings, ignored listing per whitelist Mar 21 11:42:52.522: INFO: namespace e2e-tests-container-probe-s5nl2 deletion completed in 6.104269041s • [SLOW TEST:148.602 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:42:52.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 21 11:42:56.632: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-1928d612-6b69-11ea-946c-0242ac11000f,GenerateName:,Namespace:e2e-tests-events-cqdqg,SelfLink:/api/v1/namespaces/e2e-tests-events-cqdqg/pods/send-events-1928d612-6b69-11ea-946c-0242ac11000f,UID:192a2b12-6b69-11ea-99e8-0242ac110002,ResourceVersion:1017101,Generation:0,CreationTimestamp:2020-03-21 11:42:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 606625859,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-78gnr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-78gnr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-78gnr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0013ee500} {node.kubernetes.io/unreachable Exists NoExecute 0xc0013ee520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:42:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:42:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:42:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:42:52 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.244,StartTime:2020-03-21 11:42:52 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-21 11:42:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://c0b163680dc956742d9fe40dc388b00b6fe9b49c1d0685d43c0936cfd93e2fce}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 21 11:42:58.638: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 21 11:43:00.643: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:43:00.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-cqdqg" for this suite. 
Mar 21 11:43:46.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:43:46.719: INFO: namespace: e2e-tests-events-cqdqg, resource: bindings, ignored listing per whitelist Mar 21 11:43:46.772: INFO: namespace e2e-tests-events-cqdqg deletion completed in 46.094227384s • [SLOW TEST:54.250 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:43:46.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 11:43:46.949: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Mar 21 11:43:46.956: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9qmvp/daemonsets","resourceVersion":"1017218"},"items":null} Mar 21 11:43:46.958: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9qmvp/pods","resourceVersion":"1017218"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:43:46.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-9qmvp" for this suite. Mar 21 11:43:52.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:43:53.049: INFO: namespace: e2e-tests-daemonsets-9qmvp, resource: bindings, ignored listing per whitelist Mar 21 11:43:53.064: INFO: namespace e2e-tests-daemonsets-9qmvp deletion completed in 6.093330965s S [SKIPPING] [6.291 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 11:43:46.949: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:43:53.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 21 11:43:53.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d3b953e-6b69-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-b6ztf" to be "success or failure" Mar 21 11:43:53.161: INFO: Pod "downwardapi-volume-3d3b953e-6b69-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.046594ms Mar 21 11:43:55.165: INFO: Pod "downwardapi-volume-3d3b953e-6b69-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018920093s Mar 21 11:43:57.168: INFO: Pod "downwardapi-volume-3d3b953e-6b69-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022295756s STEP: Saw pod success Mar 21 11:43:57.169: INFO: Pod "downwardapi-volume-3d3b953e-6b69-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:43:57.171: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-3d3b953e-6b69-11ea-946c-0242ac11000f container client-container: STEP: delete the pod Mar 21 11:43:57.206: INFO: Waiting for pod downwardapi-volume-3d3b953e-6b69-11ea-946c-0242ac11000f to disappear Mar 21 11:43:57.215: INFO: Pod downwardapi-volume-3d3b953e-6b69-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:43:57.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b6ztf" for this suite. 
Mar 21 11:44:03.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:44:03.255: INFO: namespace: e2e-tests-projected-b6ztf, resource: bindings, ignored listing per whitelist Mar 21 11:44:03.299: INFO: namespace e2e-tests-projected-b6ztf deletion completed in 6.081670967s • [SLOW TEST:10.235 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:44:03.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 11:44:03.409: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 21 11:44:03.419: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 21 11:44:08.423: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 21 11:44:08.423: INFO: Creating deployment "test-rolling-update-deployment" Mar 21 
11:44:08.428: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 21 11:44:08.460: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 21 11:44:10.468: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 21 11:44:10.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720387848, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720387848, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720387848, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720387848, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 11:44:12.500: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 21 11:44:12.527: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-m4xgn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-m4xgn/deployments/test-rolling-update-deployment,UID:4659dcf0-6b69-11ea-99e8-0242ac110002,ResourceVersion:1017341,Generation:1,CreationTimestamp:2020-03-21 11:44:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-21 11:44:08 +0000 UTC 2020-03-21 11:44:08 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-21 11:44:11 +0000 UTC 2020-03-21 11:44:08 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 21 11:44:12.530: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-m4xgn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-m4xgn/replicasets/test-rolling-update-deployment-75db98fb4c,UID:46600a99-6b69-11ea-99e8-0242ac110002,ResourceVersion:1017332,Generation:1,CreationTimestamp:2020-03-21 11:44:08 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4659dcf0-6b69-11ea-99e8-0242ac110002 0xc002443107 0xc002443108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 21 11:44:12.530: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 21 11:44:12.531: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-m4xgn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-m4xgn/replicasets/test-rolling-update-controller,UID:435caf61-6b69-11ea-99e8-0242ac110002,ResourceVersion:1017340,Generation:2,CreationTimestamp:2020-03-21 11:44:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4659dcf0-6b69-11ea-99e8-0242ac110002 0xc002442fe7 0xc002442fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 21 11:44:12.534: INFO: Pod "test-rolling-update-deployment-75db98fb4c-ggxjs" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-ggxjs,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-m4xgn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m4xgn/pods/test-rolling-update-deployment-75db98fb4c-ggxjs,UID:46625b57-6b69-11ea-99e8-0242ac110002,ResourceVersion:1017331,Generation:0,CreationTimestamp:2020-03-21 11:44:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 46600a99-6b69-11ea-99e8-0242ac110002 0xc00107ac57 0xc00107ac58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fc2r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fc2r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-9fc2r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00107acd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00107acf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:44:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:44:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:44:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 11:44:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.125,StartTime:2020-03-21 11:44:08 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-21 11:44:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://93ca59b797194c89597f116f1c1fa872353fbb7b2b920fbd30c195bc53c26d8e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:44:12.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-m4xgn" 
for this suite. Mar 21 11:44:18.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:44:18.604: INFO: namespace: e2e-tests-deployment-m4xgn, resource: bindings, ignored listing per whitelist Mar 21 11:44:18.629: INFO: namespace e2e-tests-deployment-m4xgn deletion completed in 6.091668656s • [SLOW TEST:15.329 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:44:18.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-2w6gp in namespace e2e-tests-proxy-qx4mv I0321 11:44:18.779001 6 runners.go:184] Created replication controller with name: proxy-service-2w6gp, namespace: e2e-tests-proxy-qx4mv, replica count: 1 I0321 11:44:19.829430 6 runners.go:184] proxy-service-2w6gp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 11:44:20.829651 6 runners.go:184] proxy-service-2w6gp Pods: 1 out of 1 created, 0 running, 1 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 11:44:21.829897 6 runners.go:184] proxy-service-2w6gp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0321 11:44:22.830145 6 runners.go:184] proxy-service-2w6gp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0321 11:44:23.830344 6 runners.go:184] proxy-service-2w6gp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0321 11:44:24.830561 6 runners.go:184] proxy-service-2w6gp Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 21 11:44:24.834: INFO: setup took 6.09846859s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 21 11:44:24.840: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qx4mv/pods/proxy-service-2w6gp-f9fcl:1080/proxy/: [proxy test output elided] >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 21 11:44:38.037: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:38.069: INFO: Number of nodes with available pods: 0 Mar 21 11:44:38.069: INFO: Node hunter-worker is running more than one daemon pod Mar 21 11:44:39.073: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:39.076: INFO: Number of nodes with available pods: 0 Mar 21 11:44:39.076: INFO: Node hunter-worker is running more than one daemon pod Mar 21 11:44:40.073: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:40.076: INFO: Number of nodes with available pods: 0 Mar 21 11:44:40.076: INFO: Node hunter-worker is running more than one daemon pod Mar 21 11:44:41.074: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:41.077: INFO: Number of nodes with available pods: 1 Mar 21 11:44:41.077: INFO: Node hunter-worker2 is running more than one daemon pod Mar 21 11:44:42.074: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:42.078: INFO: Number of nodes with available pods: 2 Mar 21 11:44:42.078: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 21 11:44:42.097: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:42.099: INFO: Number of nodes with available pods: 1 Mar 21 11:44:42.099: INFO: Node hunter-worker2 is running more than one daemon pod Mar 21 11:44:43.104: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:43.108: INFO: Number of nodes with available pods: 1 Mar 21 11:44:43.108: INFO: Node hunter-worker2 is running more than one daemon pod Mar 21 11:44:44.357: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:44.378: INFO: Number of nodes with available pods: 1 Mar 21 11:44:44.378: INFO: Node hunter-worker2 is running more than one daemon pod Mar 21 11:44:45.105: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:45.108: INFO: Number of nodes with available pods: 1 Mar 21 11:44:45.108: INFO: Node hunter-worker2 is running more than one daemon pod Mar 21 11:44:46.105: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:46.108: INFO: Number of nodes with available pods: 1 Mar 21 11:44:46.108: INFO: Node hunter-worker2 is running more than one daemon pod Mar 21 11:44:47.104: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:47.106: INFO: Number of nodes with available pods: 1 Mar 21 
11:44:47.106: INFO: Node hunter-worker2 is running more than one daemon pod Mar 21 11:44:48.104: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:48.107: INFO: Number of nodes with available pods: 1 Mar 21 11:44:48.107: INFO: Node hunter-worker2 is running more than one daemon pod Mar 21 11:44:49.106: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:49.109: INFO: Number of nodes with available pods: 1 Mar 21 11:44:49.109: INFO: Node hunter-worker2 is running more than one daemon pod Mar 21 11:44:50.104: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 11:44:50.108: INFO: Number of nodes with available pods: 2 Mar 21 11:44:50.108: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-fg7mc, will wait for the garbage collector to delete the pods Mar 21 11:44:50.170: INFO: Deleting DaemonSet.extensions daemon-set took: 6.255374ms Mar 21 11:44:50.271: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.284738ms Mar 21 11:45:01.774: INFO: Number of nodes with available pods: 0 Mar 21 11:45:01.774: INFO: Number of running nodes: 0, number of available pods: 0 Mar 21 11:45:01.776: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-fg7mc/daemonsets","resourceVersion":"1017564"},"items":null} Mar 21 11:45:01.779: INFO: 
pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-fg7mc/pods","resourceVersion":"1017564"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:45:01.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-fg7mc" for this suite. Mar 21 11:45:07.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:45:07.826: INFO: namespace: e2e-tests-daemonsets-fg7mc, resource: bindings, ignored listing per whitelist Mar 21 11:45:07.935: INFO: namespace e2e-tests-daemonsets-fg7mc deletion completed in 6.143076726s • [SLOW TEST:30.079 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:45:07.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 21 11:45:08.031: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:45:15.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-8mrp6" for this suite. Mar 21 11:45:37.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:45:38.013: INFO: namespace: e2e-tests-init-container-8mrp6, resource: bindings, ignored listing per whitelist Mar 21 11:45:38.039: INFO: namespace e2e-tests-init-container-8mrp6 deletion completed in 22.108435242s • [SLOW TEST:30.103 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:45:38.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig 
+notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-v8t4c.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-v8t4c.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-v8t4c.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-v8t4c.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-v8t4c.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-v8t4c.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 11:45:44.873: INFO: DNS probes using e2e-tests-dns-v8t4c/dns-test-7bd004e6-6b69-11ea-946c-0242ac11000f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:45:44.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-v8t4c" for this suite. Mar 21 11:45:50.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:45:51.019: INFO: namespace: e2e-tests-dns-v8t4c, resource: bindings, ignored listing per whitelist Mar 21 11:45:51.021: INFO: namespace e2e-tests-dns-v8t4c deletion completed in 6.094461366s • [SLOW TEST:12.982 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:45:51.022: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Mar 21 11:45:51.112: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Mar 21 11:45:51.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:45:51.394: INFO: stderr: "" Mar 21 11:45:51.394: INFO: stdout: "service/redis-slave created\n" Mar 21 11:45:51.394: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Mar 21 11:45:51.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:45:51.656: INFO: stderr: "" Mar 21 11:45:51.656: INFO: stdout: "service/redis-master created\n" Mar 21 11:45:51.656: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 21 11:45:51.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:45:51.947: INFO: stderr: "" Mar 21 11:45:51.947: INFO: stdout: "service/frontend created\n" Mar 21 11:45:51.947: INFO:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Mar 21 11:45:51.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:45:52.205: INFO: stderr: "" Mar 21 11:45:52.205: INFO: stdout: "deployment.extensions/frontend created\n" Mar 21 11:45:52.205: INFO:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 21 11:45:52.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:45:52.538: INFO: stderr: "" Mar 21 11:45:52.538: INFO: stdout: "deployment.extensions/redis-master created\n" Mar 21 11:45:52.538: INFO:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Mar 21 11:45:52.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:45:52.801: INFO: stderr: "" Mar 21 11:45:52.801: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Mar 21 11:45:52.801: INFO: Waiting for all frontend pods to be Running. Mar 21 11:45:57.851: INFO: Waiting for frontend to serve content. Mar 21 11:45:59.184: INFO: Failed to get response from guestbook. err: , response:
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Mar 21 11:46:04.203: INFO: Trying to add a new entry to the guestbook. Mar 21 11:46:04.219: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 21 11:46:04.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:46:04.427: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 11:46:04.427: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 21 11:46:04.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:46:04.571: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 11:46:04.571: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 21 11:46:04.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:46:04.702: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 11:46:04.702: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 21 11:46:04.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:46:04.810: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 21 11:46:04.810: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 21 11:46:04.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:46:05.038: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 11:46:05.038: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 21 11:46:05.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qjbch' Mar 21 11:46:05.174: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 11:46:05.174: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:46:05.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qjbch" for this suite. 
Mar 21 11:46:43.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:46:43.430: INFO: namespace: e2e-tests-kubectl-qjbch, resource: bindings, ignored listing per whitelist Mar 21 11:46:43.511: INFO: namespace e2e-tests-kubectl-qjbch deletion completed in 38.327072734s • [SLOW TEST:52.489 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:46:43.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-fxzpf Mar 21 11:46:47.644: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-fxzpf STEP: checking the pod's current state and 
verifying that restartCount is present Mar 21 11:46:47.647: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:50:48.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-fxzpf" for this suite. Mar 21 11:50:54.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:50:54.280: INFO: namespace: e2e-tests-container-probe-fxzpf, resource: bindings, ignored listing per whitelist Mar 21 11:50:54.321: INFO: namespace e2e-tests-container-probe-fxzpf deletion completed in 6.093804094s • [SLOW TEST:250.810 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:50:54.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 
21 11:50:54.465: INFO: Waiting up to 5m0s for pod "pod-38598133-6b6a-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-9j4hl" to be "success or failure" Mar 21 11:50:54.472: INFO: Pod "pod-38598133-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.035745ms Mar 21 11:50:56.477: INFO: Pod "pod-38598133-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011158521s Mar 21 11:50:58.481: INFO: Pod "pod-38598133-6b6a-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015580108s STEP: Saw pod success Mar 21 11:50:58.481: INFO: Pod "pod-38598133-6b6a-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:50:58.484: INFO: Trying to get logs from node hunter-worker pod pod-38598133-6b6a-11ea-946c-0242ac11000f container test-container: STEP: delete the pod Mar 21 11:50:58.506: INFO: Waiting for pod pod-38598133-6b6a-11ea-946c-0242ac11000f to disappear Mar 21 11:50:58.510: INFO: Pod pod-38598133-6b6a-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:50:58.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9j4hl" for this suite. 
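The "Waiting up to 5m0s for pod ... to be \"success or failure\"" loop above just watches the pod phase until it reaches Succeeded or Failed. A pod exercising roughly the same (root, 0666, default-medium) emptyDir case could look like the sketch below; the name, image, and command are illustrative assumptions, not the manifest the framework actually generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666   # hypothetical; the test uses a UUID-based name
spec:
  restartPolicy: Never       # lets the pod reach phase Succeeded
  containers:
  - name: test-container
    image: busybox           # assumption; the e2e test images differ by release
    command: ["/bin/sh", "-c",
      "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # "default medium": node disk rather than tmpfs
```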
Mar 21 11:51:04.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:51:04.610: INFO: namespace: e2e-tests-emptydir-9j4hl, resource: bindings, ignored listing per whitelist Mar 21 11:51:04.646: INFO: namespace e2e-tests-emptydir-9j4hl deletion completed in 6.131380533s • [SLOW TEST:10.325 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:51:04.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 21 11:51:12.782: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 21 11:51:12.811: INFO: Pod pod-with-prestop-http-hook still exists Mar 21 11:51:14.812: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 21 11:51:14.816: INFO: Pod pod-with-prestop-http-hook still exists Mar 21 11:51:16.812: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 21 11:51:16.816: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:51:16.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-bjvx2" for this suite. 
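The pod-with-prestop-http-hook being waited on above carries a preStop httpGet lifecycle hook, which the kubelet fires before stopping the container; the test then checks that the handler pod created in the BeforeEach recorded the request. A rough sketch of such a spec, where the image, path, and port are assumptions rather than the framework's generated values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1    # assumption; any long-running image works
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop  # hypothetical; the handler pod logs the hit
          port: 8080
          # host defaults to the pod's own IP; the e2e test points it
          # at the handler pod's IP instead
```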
Mar 21 11:51:38.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:51:38.896: INFO: namespace: e2e-tests-container-lifecycle-hook-bjvx2, resource: bindings, ignored listing per whitelist Mar 21 11:51:38.918: INFO: namespace e2e-tests-container-lifecycle-hook-bjvx2 deletion completed in 22.090825818s • [SLOW TEST:34.272 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:51:38.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Mar 21 11:51:39.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 21 11:51:41.167: INFO: stderr: "" 
Mar 21 11:51:41.167: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:51:41.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rf2xz" for this suite. Mar 21 11:51:47.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:51:47.235: INFO: namespace: e2e-tests-kubectl-rf2xz, resource: bindings, ignored listing per whitelist Mar 21 11:51:47.337: INFO: namespace e2e-tests-kubectl-rf2xz deletion completed in 6.166196337s • [SLOW TEST:8.418 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:51:47.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-57ecb7cb-6b6a-11ea-946c-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 21 11:51:47.455: INFO: Waiting up to 5m0s for pod "pod-configmaps-57ef8f3b-6b6a-11ea-946c-0242ac11000f" in namespace "e2e-tests-configmap-4jsh8" to be "success or failure" Mar 21 11:51:47.458: INFO: Pod "pod-configmaps-57ef8f3b-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.007679ms Mar 21 11:51:49.465: INFO: Pod "pod-configmaps-57ef8f3b-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00929697s Mar 21 11:51:51.468: INFO: Pod "pod-configmaps-57ef8f3b-6b6a-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013147306s STEP: Saw pod success Mar 21 11:51:51.468: INFO: Pod "pod-configmaps-57ef8f3b-6b6a-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:51:51.471: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-57ef8f3b-6b6a-11ea-946c-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 21 11:51:51.487: INFO: Waiting for pod pod-configmaps-57ef8f3b-6b6a-11ea-946c-0242ac11000f to disappear Mar 21 11:51:51.507: INFO: Pod pod-configmaps-57ef8f3b-6b6a-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:51:51.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4jsh8" for this suite. 
Mar 21 11:51:57.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:51:57.547: INFO: namespace: e2e-tests-configmap-4jsh8, resource: bindings, ignored listing per whitelist Mar 21 11:51:57.609: INFO: namespace e2e-tests-configmap-4jsh8 deletion completed in 6.098590237s • [SLOW TEST:10.272 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:51:57.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-pbcwt [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: 
Creating stateful set ss in namespace e2e-tests-statefulset-pbcwt STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-pbcwt Mar 21 11:51:57.743: INFO: Found 0 stateful pods, waiting for 1 Mar 21 11:52:07.752: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 21 11:52:07.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pbcwt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 21 11:52:07.976: INFO: stderr: "I0321 11:52:07.882405 1919 log.go:172] (0xc0006e84d0) (0xc00030d2c0) Create stream\nI0321 11:52:07.882466 1919 log.go:172] (0xc0006e84d0) (0xc00030d2c0) Stream added, broadcasting: 1\nI0321 11:52:07.884912 1919 log.go:172] (0xc0006e84d0) Reply frame received for 1\nI0321 11:52:07.884973 1919 log.go:172] (0xc0006e84d0) (0xc00037a000) Create stream\nI0321 11:52:07.884989 1919 log.go:172] (0xc0006e84d0) (0xc00037a000) Stream added, broadcasting: 3\nI0321 11:52:07.886208 1919 log.go:172] (0xc0006e84d0) Reply frame received for 3\nI0321 11:52:07.886243 1919 log.go:172] (0xc0006e84d0) (0xc00037a0a0) Create stream\nI0321 11:52:07.886251 1919 log.go:172] (0xc0006e84d0) (0xc00037a0a0) Stream added, broadcasting: 5\nI0321 11:52:07.887148 1919 log.go:172] (0xc0006e84d0) Reply frame received for 5\nI0321 11:52:07.970555 1919 log.go:172] (0xc0006e84d0) Data frame received for 5\nI0321 11:52:07.970614 1919 log.go:172] (0xc00037a0a0) (5) Data frame handling\nI0321 11:52:07.970698 1919 log.go:172] (0xc0006e84d0) Data frame received for 3\nI0321 11:52:07.970738 1919 log.go:172] (0xc00037a000) (3) Data frame handling\nI0321 11:52:07.970761 1919 log.go:172] (0xc00037a000) (3) Data frame sent\nI0321 11:52:07.970777 1919 log.go:172] (0xc0006e84d0) Data frame received for 3\nI0321 11:52:07.970798 1919 log.go:172] 
(0xc00037a000) (3) Data frame handling\nI0321 11:52:07.972471 1919 log.go:172] (0xc0006e84d0) Data frame received for 1\nI0321 11:52:07.972500 1919 log.go:172] (0xc00030d2c0) (1) Data frame handling\nI0321 11:52:07.972519 1919 log.go:172] (0xc00030d2c0) (1) Data frame sent\nI0321 11:52:07.972533 1919 log.go:172] (0xc0006e84d0) (0xc00030d2c0) Stream removed, broadcasting: 1\nI0321 11:52:07.972563 1919 log.go:172] (0xc0006e84d0) Go away received\nI0321 11:52:07.972749 1919 log.go:172] (0xc0006e84d0) (0xc00030d2c0) Stream removed, broadcasting: 1\nI0321 11:52:07.972767 1919 log.go:172] (0xc0006e84d0) (0xc00037a000) Stream removed, broadcasting: 3\nI0321 11:52:07.972780 1919 log.go:172] (0xc0006e84d0) (0xc00037a0a0) Stream removed, broadcasting: 5\n" Mar 21 11:52:07.977: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 21 11:52:07.977: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 21 11:52:07.981: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 21 11:52:17.986: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 21 11:52:17.986: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 11:52:18.004: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999559s Mar 21 11:52:19.009: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992106055s Mar 21 11:52:20.014: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987138319s Mar 21 11:52:21.019: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982282595s Mar 21 11:52:22.023: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977349141s Mar 21 11:52:23.027: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972484014s Mar 21 11:52:24.032: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.968603514s Mar 
21 11:52:25.037: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.963901943s Mar 21 11:52:26.042: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.959083653s Mar 21 11:52:27.047: INFO: Verifying statefulset ss doesn't scale past 1 for another 954.266447ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-pbcwt Mar 21 11:52:28.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pbcwt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 11:52:28.242: INFO: stderr: "I0321 11:52:28.181281 1943 log.go:172] (0xc00013a6e0) (0xc000730640) Create stream\nI0321 11:52:28.181339 1943 log.go:172] (0xc00013a6e0) (0xc000730640) Stream added, broadcasting: 1\nI0321 11:52:28.183329 1943 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0321 11:52:28.183370 1943 log.go:172] (0xc00013a6e0) (0xc0006b0c80) Create stream\nI0321 11:52:28.183383 1943 log.go:172] (0xc00013a6e0) (0xc0006b0c80) Stream added, broadcasting: 3\nI0321 11:52:28.184107 1943 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0321 11:52:28.184146 1943 log.go:172] (0xc00013a6e0) (0xc00050a000) Create stream\nI0321 11:52:28.184160 1943 log.go:172] (0xc00013a6e0) (0xc00050a000) Stream added, broadcasting: 5\nI0321 11:52:28.185101 1943 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0321 11:52:28.236485 1943 log.go:172] (0xc00013a6e0) Data frame received for 5\nI0321 11:52:28.236543 1943 log.go:172] (0xc00050a000) (5) Data frame handling\nI0321 11:52:28.236602 1943 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0321 11:52:28.236624 1943 log.go:172] (0xc0006b0c80) (3) Data frame handling\nI0321 11:52:28.236653 1943 log.go:172] (0xc0006b0c80) (3) Data frame sent\nI0321 11:52:28.236673 1943 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0321 11:52:28.236690 1943 log.go:172] (0xc0006b0c80) 
(3) Data frame handling\nI0321 11:52:28.238401 1943 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0321 11:52:28.238433 1943 log.go:172] (0xc000730640) (1) Data frame handling\nI0321 11:52:28.238448 1943 log.go:172] (0xc000730640) (1) Data frame sent\nI0321 11:52:28.238466 1943 log.go:172] (0xc00013a6e0) (0xc000730640) Stream removed, broadcasting: 1\nI0321 11:52:28.238566 1943 log.go:172] (0xc00013a6e0) Go away received\nI0321 11:52:28.238781 1943 log.go:172] (0xc00013a6e0) (0xc000730640) Stream removed, broadcasting: 1\nI0321 11:52:28.238808 1943 log.go:172] (0xc00013a6e0) (0xc0006b0c80) Stream removed, broadcasting: 3\nI0321 11:52:28.238822 1943 log.go:172] (0xc00013a6e0) (0xc00050a000) Stream removed, broadcasting: 5\n" Mar 21 11:52:28.242: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 21 11:52:28.242: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 21 11:52:28.246: INFO: Found 1 stateful pods, waiting for 3 Mar 21 11:52:38.251: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 11:52:38.251: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 11:52:38.251: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 21 11:52:38.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pbcwt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 21 11:52:38.471: INFO: stderr: "I0321 11:52:38.399907 1965 log.go:172] (0xc00014c840) (0xc0005b9360) Create stream\nI0321 11:52:38.399981 1965 log.go:172] (0xc00014c840) (0xc0005b9360) Stream added, broadcasting: 1\nI0321 11:52:38.407641 1965 log.go:172] (0xc00014c840) Reply frame 
received for 1\nI0321 11:52:38.407704 1965 log.go:172] (0xc00014c840) (0xc00076e000) Create stream\nI0321 11:52:38.407716 1965 log.go:172] (0xc00014c840) (0xc00076e000) Stream added, broadcasting: 3\nI0321 11:52:38.408549 1965 log.go:172] (0xc00014c840) Reply frame received for 3\nI0321 11:52:38.408591 1965 log.go:172] (0xc00014c840) (0xc000590000) Create stream\nI0321 11:52:38.408606 1965 log.go:172] (0xc00014c840) (0xc000590000) Stream added, broadcasting: 5\nI0321 11:52:38.409447 1965 log.go:172] (0xc00014c840) Reply frame received for 5\nI0321 11:52:38.464980 1965 log.go:172] (0xc00014c840) Data frame received for 5\nI0321 11:52:38.465006 1965 log.go:172] (0xc000590000) (5) Data frame handling\nI0321 11:52:38.465051 1965 log.go:172] (0xc00014c840) Data frame received for 3\nI0321 11:52:38.465081 1965 log.go:172] (0xc00076e000) (3) Data frame handling\nI0321 11:52:38.465215 1965 log.go:172] (0xc00076e000) (3) Data frame sent\nI0321 11:52:38.465429 1965 log.go:172] (0xc00014c840) Data frame received for 3\nI0321 11:52:38.465452 1965 log.go:172] (0xc00076e000) (3) Data frame handling\nI0321 11:52:38.466981 1965 log.go:172] (0xc00014c840) Data frame received for 1\nI0321 11:52:38.467026 1965 log.go:172] (0xc0005b9360) (1) Data frame handling\nI0321 11:52:38.467065 1965 log.go:172] (0xc0005b9360) (1) Data frame sent\nI0321 11:52:38.467102 1965 log.go:172] (0xc00014c840) (0xc0005b9360) Stream removed, broadcasting: 1\nI0321 11:52:38.467129 1965 log.go:172] (0xc00014c840) Go away received\nI0321 11:52:38.467366 1965 log.go:172] (0xc00014c840) (0xc0005b9360) Stream removed, broadcasting: 1\nI0321 11:52:38.467396 1965 log.go:172] (0xc00014c840) (0xc00076e000) Stream removed, broadcasting: 3\nI0321 11:52:38.467409 1965 log.go:172] (0xc00014c840) (0xc000590000) Stream removed, broadcasting: 5\n" Mar 21 11:52:38.471: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 21 11:52:38.471: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || 
true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 21 11:52:38.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pbcwt ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 21 11:52:38.746: INFO: stderr: "I0321 11:52:38.602345 1988 log.go:172] (0xc000796160) (0xc000708640) Create stream\nI0321 11:52:38.602417 1988 log.go:172] (0xc000796160) (0xc000708640) Stream added, broadcasting: 1\nI0321 11:52:38.605085 1988 log.go:172] (0xc000796160) Reply frame received for 1\nI0321 11:52:38.605292 1988 log.go:172] (0xc000796160) (0xc0006ae000) Create stream\nI0321 11:52:38.605316 1988 log.go:172] (0xc000796160) (0xc0006ae000) Stream added, broadcasting: 3\nI0321 11:52:38.606510 1988 log.go:172] (0xc000796160) Reply frame received for 3\nI0321 11:52:38.606566 1988 log.go:172] (0xc000796160) (0xc000212dc0) Create stream\nI0321 11:52:38.606591 1988 log.go:172] (0xc000796160) (0xc000212dc0) Stream added, broadcasting: 5\nI0321 11:52:38.607597 1988 log.go:172] (0xc000796160) Reply frame received for 5\nI0321 11:52:38.741907 1988 log.go:172] (0xc000796160) Data frame received for 3\nI0321 11:52:38.741946 1988 log.go:172] (0xc0006ae000) (3) Data frame handling\nI0321 11:52:38.741966 1988 log.go:172] (0xc0006ae000) (3) Data frame sent\nI0321 11:52:38.742110 1988 log.go:172] (0xc000796160) Data frame received for 5\nI0321 11:52:38.742122 1988 log.go:172] (0xc000212dc0) (5) Data frame handling\nI0321 11:52:38.742275 1988 log.go:172] (0xc000796160) Data frame received for 3\nI0321 11:52:38.742290 1988 log.go:172] (0xc0006ae000) (3) Data frame handling\nI0321 11:52:38.743567 1988 log.go:172] (0xc000796160) Data frame received for 1\nI0321 11:52:38.743584 1988 log.go:172] (0xc000708640) (1) Data frame handling\nI0321 11:52:38.743599 1988 log.go:172] (0xc000708640) (1) Data frame sent\nI0321 11:52:38.743621 1988 log.go:172] (0xc000796160) (0xc000708640) Stream removed, 
broadcasting: 1\nI0321 11:52:38.743638 1988 log.go:172] (0xc000796160) Go away received\nI0321 11:52:38.743771 1988 log.go:172] (0xc000796160) (0xc000708640) Stream removed, broadcasting: 1\nI0321 11:52:38.743789 1988 log.go:172] (0xc000796160) (0xc0006ae000) Stream removed, broadcasting: 3\nI0321 11:52:38.743795 1988 log.go:172] (0xc000796160) (0xc000212dc0) Stream removed, broadcasting: 5\n" Mar 21 11:52:38.746: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 21 11:52:38.746: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 21 11:52:38.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pbcwt ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 21 11:52:38.993: INFO: stderr: "I0321 11:52:38.874915 2011 log.go:172] (0xc00013a790) (0xc000774640) Create stream\nI0321 11:52:38.874963 2011 log.go:172] (0xc00013a790) (0xc000774640) Stream added, broadcasting: 1\nI0321 11:52:38.876903 2011 log.go:172] (0xc00013a790) Reply frame received for 1\nI0321 11:52:38.876942 2011 log.go:172] (0xc00013a790) (0xc0006b6dc0) Create stream\nI0321 11:52:38.876970 2011 log.go:172] (0xc00013a790) (0xc0006b6dc0) Stream added, broadcasting: 3\nI0321 11:52:38.878164 2011 log.go:172] (0xc00013a790) Reply frame received for 3\nI0321 11:52:38.878216 2011 log.go:172] (0xc00013a790) (0xc0007746e0) Create stream\nI0321 11:52:38.878229 2011 log.go:172] (0xc00013a790) (0xc0007746e0) Stream added, broadcasting: 5\nI0321 11:52:38.879112 2011 log.go:172] (0xc00013a790) Reply frame received for 5\nI0321 11:52:38.988129 2011 log.go:172] (0xc00013a790) Data frame received for 5\nI0321 11:52:38.988186 2011 log.go:172] (0xc00013a790) Data frame received for 3\nI0321 11:52:38.988204 2011 log.go:172] (0xc0006b6dc0) (3) Data frame handling\nI0321 11:52:38.988217 2011 log.go:172] (0xc0006b6dc0) (3) Data frame 
sent\nI0321 11:52:38.988236 2011 log.go:172] (0xc0007746e0) (5) Data frame handling\nI0321 11:52:38.988351 2011 log.go:172] (0xc00013a790) Data frame received for 3\nI0321 11:52:38.988365 2011 log.go:172] (0xc0006b6dc0) (3) Data frame handling\nI0321 11:52:38.990629 2011 log.go:172] (0xc00013a790) Data frame received for 1\nI0321 11:52:38.990674 2011 log.go:172] (0xc000774640) (1) Data frame handling\nI0321 11:52:38.990696 2011 log.go:172] (0xc000774640) (1) Data frame sent\nI0321 11:52:38.990714 2011 log.go:172] (0xc00013a790) (0xc000774640) Stream removed, broadcasting: 1\nI0321 11:52:38.990731 2011 log.go:172] (0xc00013a790) Go away received\nI0321 11:52:38.990955 2011 log.go:172] (0xc00013a790) (0xc000774640) Stream removed, broadcasting: 1\nI0321 11:52:38.990987 2011 log.go:172] (0xc00013a790) (0xc0006b6dc0) Stream removed, broadcasting: 3\nI0321 11:52:38.991007 2011 log.go:172] (0xc00013a790) (0xc0007746e0) Stream removed, broadcasting: 5\n" Mar 21 11:52:38.994: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 21 11:52:38.994: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 21 11:52:38.994: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 11:52:38.997: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 21 11:52:49.006: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 21 11:52:49.006: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 21 11:52:49.006: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 21 11:52:49.017: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999505s Mar 21 11:52:50.022: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995636433s Mar 21 11:52:51.027: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 7.99030956s Mar 21 11:52:52.032: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98511483s Mar 21 11:52:53.037: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980580387s Mar 21 11:52:54.042: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.975601467s Mar 21 11:52:55.048: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970438716s Mar 21 11:52:56.054: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964759344s Mar 21 11:52:57.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958920644s Mar 21 11:52:58.064: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.748383ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-pbcwt Mar 21 11:52:59.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pbcwt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 11:52:59.299: INFO: stderr: "I0321 11:52:59.198562 2034 log.go:172] (0xc00015c6e0) (0xc0005c9400) Create stream\nI0321 11:52:59.198622 2034 log.go:172] (0xc00015c6e0) (0xc0005c9400) Stream added, broadcasting: 1\nI0321 11:52:59.201922 2034 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0321 11:52:59.201991 2034 log.go:172] (0xc00015c6e0) (0xc000766000) Create stream\nI0321 11:52:59.202117 2034 log.go:172] (0xc00015c6e0) (0xc000766000) Stream added, broadcasting: 3\nI0321 11:52:59.203219 2034 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0321 11:52:59.203269 2034 log.go:172] (0xc00015c6e0) (0xc0005c94a0) Create stream\nI0321 11:52:59.203288 2034 log.go:172] (0xc00015c6e0) (0xc0005c94a0) Stream added, broadcasting: 5\nI0321 11:52:59.204446 2034 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0321 11:52:59.294589 2034 log.go:172] (0xc00015c6e0) Data frame received for 5\nI0321 11:52:59.294624 2034 log.go:172] 
(0xc0005c94a0) (5) Data frame handling\nI0321 11:52:59.294647 2034 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0321 11:52:59.294656 2034 log.go:172] (0xc000766000) (3) Data frame handling\nI0321 11:52:59.294665 2034 log.go:172] (0xc000766000) (3) Data frame sent\nI0321 11:52:59.294674 2034 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0321 11:52:59.294681 2034 log.go:172] (0xc000766000) (3) Data frame handling\nI0321 11:52:59.296278 2034 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0321 11:52:59.296312 2034 log.go:172] (0xc0005c9400) (1) Data frame handling\nI0321 11:52:59.296346 2034 log.go:172] (0xc0005c9400) (1) Data frame sent\nI0321 11:52:59.296368 2034 log.go:172] (0xc00015c6e0) (0xc0005c9400) Stream removed, broadcasting: 1\nI0321 11:52:59.296397 2034 log.go:172] (0xc00015c6e0) Go away received\nI0321 11:52:59.296801 2034 log.go:172] (0xc00015c6e0) (0xc0005c9400) Stream removed, broadcasting: 1\nI0321 11:52:59.296820 2034 log.go:172] (0xc00015c6e0) (0xc000766000) Stream removed, broadcasting: 3\nI0321 11:52:59.296831 2034 log.go:172] (0xc00015c6e0) (0xc0005c94a0) Stream removed, broadcasting: 5\n" Mar 21 11:52:59.300: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 21 11:52:59.300: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 21 11:52:59.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pbcwt ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 11:52:59.487: INFO: stderr: "I0321 11:52:59.430317 2056 log.go:172] (0xc000138840) (0xc000756640) Create stream\nI0321 11:52:59.430374 2056 log.go:172] (0xc000138840) (0xc000756640) Stream added, broadcasting: 1\nI0321 11:52:59.432119 2056 log.go:172] (0xc000138840) Reply frame received for 1\nI0321 11:52:59.432160 2056 log.go:172] (0xc000138840) (0xc000570e60) Create 
stream\nI0321 11:52:59.432172 2056 log.go:172] (0xc000138840) (0xc000570e60) Stream added, broadcasting: 3\nI0321 11:52:59.432849 2056 log.go:172] (0xc000138840) Reply frame received for 3\nI0321 11:52:59.432871 2056 log.go:172] (0xc000138840) (0xc000570fa0) Create stream\nI0321 11:52:59.432879 2056 log.go:172] (0xc000138840) (0xc000570fa0) Stream added, broadcasting: 5\nI0321 11:52:59.433800 2056 log.go:172] (0xc000138840) Reply frame received for 5\nI0321 11:52:59.481747 2056 log.go:172] (0xc000138840) Data frame received for 5\nI0321 11:52:59.481838 2056 log.go:172] (0xc000138840) Data frame received for 3\nI0321 11:52:59.481878 2056 log.go:172] (0xc000570e60) (3) Data frame handling\nI0321 11:52:59.481895 2056 log.go:172] (0xc000570e60) (3) Data frame sent\nI0321 11:52:59.481905 2056 log.go:172] (0xc000138840) Data frame received for 3\nI0321 11:52:59.481924 2056 log.go:172] (0xc000570fa0) (5) Data frame handling\nI0321 11:52:59.482046 2056 log.go:172] (0xc000570e60) (3) Data frame handling\nI0321 11:52:59.483276 2056 log.go:172] (0xc000138840) Data frame received for 1\nI0321 11:52:59.483302 2056 log.go:172] (0xc000756640) (1) Data frame handling\nI0321 11:52:59.483314 2056 log.go:172] (0xc000756640) (1) Data frame sent\nI0321 11:52:59.483335 2056 log.go:172] (0xc000138840) (0xc000756640) Stream removed, broadcasting: 1\nI0321 11:52:59.483371 2056 log.go:172] (0xc000138840) Go away received\nI0321 11:52:59.483585 2056 log.go:172] (0xc000138840) (0xc000756640) Stream removed, broadcasting: 1\nI0321 11:52:59.483619 2056 log.go:172] (0xc000138840) (0xc000570e60) Stream removed, broadcasting: 3\nI0321 11:52:59.483637 2056 log.go:172] (0xc000138840) (0xc000570fa0) Stream removed, broadcasting: 5\n" Mar 21 11:52:59.487: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 21 11:52:59.487: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 21 11:52:59.487: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pbcwt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 11:53:00.176: INFO: stderr: "I0321 11:53:00.115483 2079 log.go:172] (0xc00084e2c0) (0xc00071e640) Create stream\nI0321 11:53:00.115556 2079 log.go:172] (0xc00084e2c0) (0xc00071e640) Stream added, broadcasting: 1\nI0321 11:53:00.117649 2079 log.go:172] (0xc00084e2c0) Reply frame received for 1\nI0321 11:53:00.117689 2079 log.go:172] (0xc00084e2c0) (0xc0001a2e60) Create stream\nI0321 11:53:00.117700 2079 log.go:172] (0xc00084e2c0) (0xc0001a2e60) Stream added, broadcasting: 3\nI0321 11:53:00.118453 2079 log.go:172] (0xc00084e2c0) Reply frame received for 3\nI0321 11:53:00.118490 2079 log.go:172] (0xc00084e2c0) (0xc00044e000) Create stream\nI0321 11:53:00.118502 2079 log.go:172] (0xc00084e2c0) (0xc00044e000) Stream added, broadcasting: 5\nI0321 11:53:00.119216 2079 log.go:172] (0xc00084e2c0) Reply frame received for 5\nI0321 11:53:00.171853 2079 log.go:172] (0xc00084e2c0) Data frame received for 3\nI0321 11:53:00.171883 2079 log.go:172] (0xc0001a2e60) (3) Data frame handling\nI0321 11:53:00.171893 2079 log.go:172] (0xc0001a2e60) (3) Data frame sent\nI0321 11:53:00.171898 2079 log.go:172] (0xc00084e2c0) Data frame received for 3\nI0321 11:53:00.171902 2079 log.go:172] (0xc0001a2e60) (3) Data frame handling\nI0321 11:53:00.171922 2079 log.go:172] (0xc00084e2c0) Data frame received for 5\nI0321 11:53:00.171927 2079 log.go:172] (0xc00044e000) (5) Data frame handling\nI0321 11:53:00.173026 2079 log.go:172] (0xc00084e2c0) Data frame received for 1\nI0321 11:53:00.173057 2079 log.go:172] (0xc00071e640) (1) Data frame handling\nI0321 11:53:00.173083 2079 log.go:172] (0xc00071e640) (1) Data frame sent\nI0321 11:53:00.173106 2079 log.go:172] (0xc00084e2c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0321 11:53:00.173260 2079 log.go:172] (0xc00084e2c0) Go away received\nI0321 
11:53:00.173417 2079 log.go:172] (0xc00084e2c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0321 11:53:00.173434 2079 log.go:172] (0xc00084e2c0) (0xc0001a2e60) Stream removed, broadcasting: 3\nI0321 11:53:00.173444 2079 log.go:172] (0xc00084e2c0) (0xc00044e000) Stream removed, broadcasting: 5\n" Mar 21 11:53:00.176: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 21 11:53:00.176: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 21 11:53:00.176: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 21 11:53:30.191: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pbcwt Mar 21 11:53:30.193: INFO: Scaling statefulset ss to 0 Mar 21 11:53:30.202: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 11:53:30.203: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:53:30.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-pbcwt" for this suite. 
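The 'Verifying that stateful set ss was scaled up in order' and 'scaled down in reverse order' steps above check the StatefulSet ordering guarantee: pods are created with ascending ordinals (ss-0, ss-1, ss-2) and deleted highest-ordinal-first. A toy model of that verification (the `(action, ordinal)` event-tuple shape is an assumption for illustration, not the framework's actual data structure):

```python
def scale_order_ok(events):
    # events: list of ("create" | "delete", ordinal) tuples in observed order.
    # Scale-up must create ordinals in ascending order; scale-down must
    # delete them in strictly descending order.
    created = [o for action, o in events if action == "create"]
    deleted = [o for action, o in events if action == "delete"]
    return created == sorted(created) and deleted == sorted(deleted, reverse=True)
```

The test above also confirms the halting half of the guarantee: while any stateful pod is unready (its index.html moved away so the readiness probe fails), the controller refuses to create or delete further ordinals, which is what the ten "doesn't scale past N" countdown lines verify.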
Mar 21 11:53:36.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:53:36.282: INFO: namespace: e2e-tests-statefulset-pbcwt, resource: bindings, ignored listing per whitelist Mar 21 11:53:36.309: INFO: namespace e2e-tests-statefulset-pbcwt deletion completed in 6.090578632s • [SLOW TEST:98.700 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:53:36.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 21 11:53:36.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nxv27' Mar 21 11:53:36.696: INFO: stderr: "" Mar 21 11:53:36.696: INFO: stdout: 
"replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 21 11:53:37.700: INFO: Selector matched 1 pods for map[app:redis] Mar 21 11:53:37.700: INFO: Found 0 / 1 Mar 21 11:53:38.701: INFO: Selector matched 1 pods for map[app:redis] Mar 21 11:53:38.701: INFO: Found 0 / 1 Mar 21 11:53:39.701: INFO: Selector matched 1 pods for map[app:redis] Mar 21 11:53:39.701: INFO: Found 0 / 1 Mar 21 11:53:40.701: INFO: Selector matched 1 pods for map[app:redis] Mar 21 11:53:40.701: INFO: Found 1 / 1 Mar 21 11:53:40.701: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 21 11:53:40.705: INFO: Selector matched 1 pods for map[app:redis] Mar 21 11:53:40.705: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 21 11:53:40.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-wqvfp --namespace=e2e-tests-kubectl-nxv27 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 21 11:53:40.807: INFO: stderr: "" Mar 21 11:53:40.807: INFO: stdout: "pod/redis-master-wqvfp patched\n" STEP: checking annotations Mar 21 11:53:40.826: INFO: Selector matched 1 pods for map[app:redis] Mar 21 11:53:40.826: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:53:40.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nxv27" for this suite. 
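The `kubectl patch pod ... -p '{"metadata":{"annotations":{"x":"y"}}}'` call above applies a merge-style patch: nested maps in the patch are folded into the live object rather than replacing it, which is why the pod gains the `x: y` annotation without losing any existing metadata. A simplified sketch of that merge (kubectl's default for built-in types is strategic merge patch, which additionally has per-field list-merge rules not modeled here):

```python
def merge_patch(live, patch):
    # Recursively fold the patch into the live object: nested dicts merge,
    # everything else overwrites in place.
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(live.get(key), dict):
            merge_patch(live[key], value)
        else:
            live[key] = value
    return live

pod = {"metadata": {"name": "redis-master-wqvfp", "annotations": {}}}
merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

After the merge, `pod["metadata"]["annotations"]` contains `{"x": "y"}` while the name is untouched, matching what the test's 'checking annotations' step asserts.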
Mar 21 11:54:02.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:54:02.890: INFO: namespace: e2e-tests-kubectl-nxv27, resource: bindings, ignored listing per whitelist Mar 21 11:54:02.926: INFO: namespace e2e-tests-kubectl-nxv27 deletion completed in 22.096300388s • [SLOW TEST:26.617 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:54:02.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 21 11:54:03.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8c1e366-6b6a-11ea-946c-0242ac11000f" in namespace 
"e2e-tests-projected-d8j76" to be "success or failure" Mar 21 11:54:03.035: INFO: Pod "downwardapi-volume-a8c1e366-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.836999ms Mar 21 11:54:05.040: INFO: Pod "downwardapi-volume-a8c1e366-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008528564s Mar 21 11:54:07.044: INFO: Pod "downwardapi-volume-a8c1e366-6b6a-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012299137s STEP: Saw pod success Mar 21 11:54:07.044: INFO: Pod "downwardapi-volume-a8c1e366-6b6a-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:54:07.046: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a8c1e366-6b6a-11ea-946c-0242ac11000f container client-container: STEP: delete the pod Mar 21 11:54:07.077: INFO: Waiting for pod downwardapi-volume-a8c1e366-6b6a-11ea-946c-0242ac11000f to disappear Mar 21 11:54:07.102: INFO: Pod downwardapi-volume-a8c1e366-6b6a-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:54:07.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d8j76" for this suite. 
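The downward-API test above checks that a `resourceFieldRef` for `limits.cpu` falls back to the node's allocatable CPU when the container declares no limit. A rough model of the value the volume exposes, assuming millicore units and ceiling rounding against the requested divisor (the function name and unit choice are illustrative, not the framework's API):

```python
import math

def downward_cpu_limit(limit_millicores, node_allocatable_millicores,
                       divisor_millicores=1000):
    # With no container CPU limit set, the node's allocatable CPU is
    # reported instead; the result is scaled by the divisor and rounded up.
    effective = (limit_millicores if limit_millicores is not None
                 else node_allocatable_millicores)
    return math.ceil(effective / divisor_millicores)
```

So on a node with 16 allocatable cores, a limitless container sees 16 through the projected downward-API file, which is the "node allocatable as default cpu limit" behavior this conformance test asserts.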
Mar 21 11:54:13.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:54:13.174: INFO: namespace: e2e-tests-projected-d8j76, resource: bindings, ignored listing per whitelist
Mar 21 11:54:13.203: INFO: namespace e2e-tests-projected-d8j76 deletion completed in 6.097312047s
• [SLOW TEST:10.277 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:54:13.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-aee6253a-6b6a-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 21 11:54:13.336: INFO: Waiting up to 5m0s for pod "pod-configmaps-aee6b3da-6b6a-11ea-946c-0242ac11000f" in namespace "e2e-tests-configmap-ls6qt" to be "success or failure"
Mar 21 11:54:13.351: INFO: Pod "pod-configmaps-aee6b3da-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.755484ms
Mar 21 11:54:15.466: INFO: Pod "pod-configmaps-aee6b3da-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13067689s
Mar 21 11:54:17.471: INFO: Pod "pod-configmaps-aee6b3da-6b6a-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134813724s
STEP: Saw pod success
Mar 21 11:54:17.471: INFO: Pod "pod-configmaps-aee6b3da-6b6a-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:54:17.474: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-aee6b3da-6b6a-11ea-946c-0242ac11000f container configmap-volume-test: 
STEP: delete the pod
Mar 21 11:54:17.491: INFO: Waiting for pod pod-configmaps-aee6b3da-6b6a-11ea-946c-0242ac11000f to disappear
Mar 21 11:54:17.495: INFO: Pod pod-configmaps-aee6b3da-6b6a-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:54:17.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ls6qt" for this suite.
Mar 21 11:54:23.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:54:23.561: INFO: namespace: e2e-tests-configmap-ls6qt, resource: bindings, ignored listing per whitelist
Mar 21 11:54:23.605: INFO: namespace e2e-tests-configmap-ls6qt deletion completed in 6.107105571s
• [SLOW TEST:10.402 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:54:23.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0321 11:54:54.273933 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 21 11:54:54.274: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:54:54.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4g8p5" for this suite.
Mar 21 11:55:00.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:55:00.353: INFO: namespace: e2e-tests-gc-4g8p5, resource: bindings, ignored listing per whitelist
Mar 21 11:55:00.356: INFO: namespace e2e-tests-gc-4g8p5 deletion completed in 6.078437998s
• [SLOW TEST:36.750 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:55:00.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 21 11:55:00.494: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb0175da-6b6a-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-64zgx" to be "success or failure"
Mar 21 11:55:00.497: INFO: Pod "downwardapi-volume-cb0175da-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.649507ms
Mar 21 11:55:02.502: INFO: Pod "downwardapi-volume-cb0175da-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008138953s
Mar 21 11:55:04.677: INFO: Pod "downwardapi-volume-cb0175da-6b6a-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183299651s
STEP: Saw pod success
Mar 21 11:55:04.677: INFO: Pod "downwardapi-volume-cb0175da-6b6a-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:55:04.680: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-cb0175da-6b6a-11ea-946c-0242ac11000f container client-container: 
STEP: delete the pod
Mar 21 11:55:04.948: INFO: Waiting for pod downwardapi-volume-cb0175da-6b6a-11ea-946c-0242ac11000f to disappear
Mar 21 11:55:04.954: INFO: Pod downwardapi-volume-cb0175da-6b6a-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:55:04.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-64zgx" for this suite.
Mar 21 11:55:10.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:55:10.995: INFO: namespace: e2e-tests-projected-64zgx, resource: bindings, ignored listing per whitelist
Mar 21 11:55:11.061: INFO: namespace e2e-tests-projected-64zgx deletion completed in 6.103523176s
• [SLOW TEST:10.705 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:55:11.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 21 11:55:11.145: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d15c1ad6-6b6a-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-tbbrc" to be "success or failure"
Mar 21 11:55:11.180: INFO: Pod "downwardapi-volume-d15c1ad6-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.888259ms
Mar 21 11:55:13.184: INFO: Pod "downwardapi-volume-d15c1ad6-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039090169s
Mar 21 11:55:15.210: INFO: Pod "downwardapi-volume-d15c1ad6-6b6a-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064357234s
STEP: Saw pod success
Mar 21 11:55:15.210: INFO: Pod "downwardapi-volume-d15c1ad6-6b6a-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:55:15.212: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-d15c1ad6-6b6a-11ea-946c-0242ac11000f container client-container: 
STEP: delete the pod
Mar 21 11:55:15.228: INFO: Waiting for pod downwardapi-volume-d15c1ad6-6b6a-11ea-946c-0242ac11000f to disappear
Mar 21 11:55:15.233: INFO: Pod downwardapi-volume-d15c1ad6-6b6a-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:55:15.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tbbrc" for this suite.
Mar 21 11:55:21.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:55:21.279: INFO: namespace: e2e-tests-projected-tbbrc, resource: bindings, ignored listing per whitelist
Mar 21 11:55:21.322: INFO: namespace e2e-tests-projected-tbbrc deletion completed in 6.086258612s
• [SLOW TEST:10.261 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:55:21.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:55:25.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-tg24z" for this suite.
Mar 21 11:56:15.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:56:15.579: INFO: namespace: e2e-tests-kubelet-test-tg24z, resource: bindings, ignored listing per whitelist
Mar 21 11:56:15.650: INFO: namespace e2e-tests-kubelet-test-tg24z deletion completed in 50.162850612s
• [SLOW TEST:54.327 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:56:15.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Mar 21 11:56:15.762: INFO: Pod name pod-release: Found 0 pods out of 1
Mar 21 11:56:20.766: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:56:21.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-vl5lj" for this suite.
Mar 21 11:56:27.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:56:27.967: INFO: namespace: e2e-tests-replication-controller-vl5lj, resource: bindings, ignored listing per whitelist
Mar 21 11:56:28.001: INFO: namespace e2e-tests-replication-controller-vl5lj deletion completed in 6.080707516s
• [SLOW TEST:12.351 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:56:28.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-ff3da2b0-6b6a-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 21 11:56:28.142: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ff3f98b1-6b6a-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-vfr9c" to be "success or failure"
Mar 21 11:56:28.157: INFO: Pod "pod-projected-secrets-ff3f98b1-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.557662ms
Mar 21 11:56:30.162: INFO: Pod "pod-projected-secrets-ff3f98b1-6b6a-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020045262s
Mar 21 11:56:32.166: INFO: Pod "pod-projected-secrets-ff3f98b1-6b6a-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023631376s
STEP: Saw pod success
Mar 21 11:56:32.166: INFO: Pod "pod-projected-secrets-ff3f98b1-6b6a-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:56:32.168: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-ff3f98b1-6b6a-11ea-946c-0242ac11000f container projected-secret-volume-test: 
STEP: delete the pod
Mar 21 11:56:32.199: INFO: Waiting for pod pod-projected-secrets-ff3f98b1-6b6a-11ea-946c-0242ac11000f to disappear
Mar 21 11:56:32.229: INFO: Pod pod-projected-secrets-ff3f98b1-6b6a-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:56:32.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vfr9c" for this suite.
Mar 21 11:56:38.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:56:38.280: INFO: namespace: e2e-tests-projected-vfr9c, resource: bindings, ignored listing per whitelist
Mar 21 11:56:38.326: INFO: namespace e2e-tests-projected-vfr9c deletion completed in 6.093186252s
• [SLOW TEST:10.325 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:56:38.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 21 11:56:38.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0560b02b-6b6b-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-22pwd" to be "success or failure"
Mar 21 11:56:38.427: INFO: Pod "downwardapi-volume-0560b02b-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.426926ms
Mar 21 11:56:40.430: INFO: Pod "downwardapi-volume-0560b02b-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006638688s
Mar 21 11:56:42.434: INFO: Pod "downwardapi-volume-0560b02b-6b6b-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010970901s
STEP: Saw pod success
Mar 21 11:56:42.435: INFO: Pod "downwardapi-volume-0560b02b-6b6b-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:56:42.438: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0560b02b-6b6b-11ea-946c-0242ac11000f container client-container: 
STEP: delete the pod
Mar 21 11:56:42.484: INFO: Waiting for pod downwardapi-volume-0560b02b-6b6b-11ea-946c-0242ac11000f to disappear
Mar 21 11:56:42.499: INFO: Pod downwardapi-volume-0560b02b-6b6b-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:56:42.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-22pwd" for this suite.
Mar 21 11:56:48.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:56:48.541: INFO: namespace: e2e-tests-downward-api-22pwd, resource: bindings, ignored listing per whitelist
Mar 21 11:56:48.599: INFO: namespace e2e-tests-downward-api-22pwd deletion completed in 6.096306608s
• [SLOW TEST:10.272 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:56:48.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-0b86b84c-6b6b-11ea-946c-0242ac11000f
STEP: Creating configMap with name cm-test-opt-upd-0b86b8bc-6b6b-11ea-946c-0242ac11000f
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0b86b84c-6b6b-11ea-946c-0242ac11000f
STEP: Updating configmap cm-test-opt-upd-0b86b8bc-6b6b-11ea-946c-0242ac11000f
STEP: Creating configMap with name cm-test-opt-create-0b86b8f6-6b6b-11ea-946c-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:58:07.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gs744" for this suite.
Mar 21 11:58:29.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:58:29.306: INFO: namespace: e2e-tests-configmap-gs744, resource: bindings, ignored listing per whitelist
Mar 21 11:58:29.333: INFO: namespace e2e-tests-configmap-gs744 deletion completed in 22.108828802s
• [SLOW TEST:100.733 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:58:29.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 21 11:58:33.527: INFO: Waiting up to 5m0s for pod "client-envvars-49fbcf95-6b6b-11ea-946c-0242ac11000f" in namespace "e2e-tests-pods-l24hg" to be "success or failure"
Mar 21 11:58:33.557: INFO: Pod "client-envvars-49fbcf95-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.734102ms
Mar 21 11:58:35.561: INFO: Pod "client-envvars-49fbcf95-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034264575s
Mar 21 11:58:37.565: INFO: Pod "client-envvars-49fbcf95-6b6b-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038646602s
STEP: Saw pod success
Mar 21 11:58:37.565: INFO: Pod "client-envvars-49fbcf95-6b6b-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 11:58:37.568: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-49fbcf95-6b6b-11ea-946c-0242ac11000f container env3cont: 
STEP: delete the pod
Mar 21 11:58:37.588: INFO: Waiting for pod client-envvars-49fbcf95-6b6b-11ea-946c-0242ac11000f to disappear
Mar 21 11:58:37.592: INFO: Pod client-envvars-49fbcf95-6b6b-11ea-946c-0242ac11000f no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:58:37.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-l24hg" for this suite.
Mar 21 11:59:17.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 11:59:17.637: INFO: namespace: e2e-tests-pods-l24hg, resource: bindings, ignored listing per whitelist
Mar 21 11:59:17.691: INFO: namespace e2e-tests-pods-l24hg deletion completed in 40.094428336s
• [SLOW TEST:48.358 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 11:59:17.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-qdw7x
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qdw7x to expose endpoints map[]
Mar 21 11:59:17.839: INFO: Get endpoints failed (2.902044ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Mar 21 11:59:18.841: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qdw7x exposes endpoints map[] (1.005100112s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-qdw7x
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qdw7x to expose endpoints map[pod1:[100]]
Mar 21 11:59:22.881: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qdw7x exposes endpoints map[pod1:[100]] (4.03478831s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-qdw7x
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qdw7x to expose endpoints map[pod2:[101] pod1:[100]]
Mar 21 11:59:25.942: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qdw7x exposes endpoints map[pod1:[100] pod2:[101]] (3.056727444s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-qdw7x
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qdw7x to expose endpoints map[pod2:[101]]
Mar 21 11:59:26.990: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qdw7x exposes endpoints map[pod2:[101]] (1.043456281s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-qdw7x
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qdw7x to expose endpoints map[]
Mar 21 11:59:27.024: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qdw7x exposes endpoints map[] (29.80381ms elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 11:59:27.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-qdw7x" for this suite.
Mar 21 11:59:33.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:59:33.154: INFO: namespace: e2e-tests-services-qdw7x, resource: bindings, ignored listing per whitelist Mar 21 11:59:33.182: INFO: namespace e2e-tests-services-qdw7x deletion completed in 6.091488722s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:15.491 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:59:33.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-6d9e6947-6b6b-11ea-946c-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 21 11:59:33.326: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6da16a6e-6b6b-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-b2czl" to be "success or failure" Mar 21 11:59:33.330: INFO: Pod 
"pod-projected-configmaps-6da16a6e-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.79585ms Mar 21 11:59:35.334: INFO: Pod "pod-projected-configmaps-6da16a6e-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007964304s Mar 21 11:59:37.339: INFO: Pod "pod-projected-configmaps-6da16a6e-6b6b-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012601397s STEP: Saw pod success Mar 21 11:59:37.339: INFO: Pod "pod-projected-configmaps-6da16a6e-6b6b-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 11:59:37.342: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-6da16a6e-6b6b-11ea-946c-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 21 11:59:37.361: INFO: Waiting for pod pod-projected-configmaps-6da16a6e-6b6b-11ea-946c-0242ac11000f to disappear Mar 21 11:59:37.390: INFO: Pod pod-projected-configmaps-6da16a6e-6b6b-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:59:37.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b2czl" for this suite. 
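
The projected-ConfigMap test above mounts a ConfigMap through a `projected` volume and reads it from a container running as a non-root user. A manifest sketch under assumed names and an assumed UID (the test's exact image, command, and UID are not shown in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo        # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                          # any non-root UID; the test's UID is an assumption
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test    # container name taken from the log
    image: busybox                           # stand-in image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # hypothetical ConfigMap name
```

The pod runs to completion (`Phase="Succeeded"` in the log), which fits the "success or failure" condition the framework waits on.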
Mar 21 11:59:43.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:59:43.469: INFO: namespace: e2e-tests-projected-b2czl, resource: bindings, ignored listing per whitelist Mar 21 11:59:43.478: INFO: namespace e2e-tests-projected-b2czl deletion completed in 6.08475696s • [SLOW TEST:10.296 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:59:43.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 21 11:59:48.182: INFO: Successfully updated pod "pod-update-activedeadlineseconds-73bcf471-6b6b-11ea-946c-0242ac11000f" Mar 21 11:59:48.182: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-73bcf471-6b6b-11ea-946c-0242ac11000f" in namespace "e2e-tests-pods-bkp4x" 
to be "terminated due to deadline exceeded" Mar 21 11:59:48.186: INFO: Pod "pod-update-activedeadlineseconds-73bcf471-6b6b-11ea-946c-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 3.466945ms Mar 21 11:59:50.190: INFO: Pod "pod-update-activedeadlineseconds-73bcf471-6b6b-11ea-946c-0242ac11000f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007878968s Mar 21 11:59:50.190: INFO: Pod "pod-update-activedeadlineseconds-73bcf471-6b6b-11ea-946c-0242ac11000f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 11:59:50.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-bkp4x" for this suite. Mar 21 11:59:56.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 11:59:56.273: INFO: namespace: e2e-tests-pods-bkp4x, resource: bindings, ignored listing per whitelist Mar 21 11:59:56.279: INFO: namespace e2e-tests-pods-bkp4x deletion completed in 6.084759681s • [SLOW TEST:12.800 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 11:59:56.279: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 21 12:00:04.455: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 12:00:04.477: INFO: Pod pod-with-poststart-exec-hook still exists Mar 21 12:00:06.478: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 12:00:06.483: INFO: Pod pod-with-poststart-exec-hook still exists Mar 21 12:00:08.478: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 12:00:08.482: INFO: Pod pod-with-poststart-exec-hook still exists Mar 21 12:00:10.478: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 12:00:10.502: INFO: Pod pod-with-poststart-exec-hook still exists Mar 21 12:00:12.478: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 12:00:12.482: INFO: Pod pod-with-poststart-exec-hook still exists Mar 21 12:00:14.478: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 12:00:14.482: INFO: Pod pod-with-poststart-exec-hook still exists Mar 21 12:00:16.478: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 12:00:16.481: INFO: Pod pod-with-poststart-exec-hook still exists Mar 21 12:00:18.478: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 12:00:18.481: INFO: Pod pod-with-poststart-exec-hook still exists Mar 21 12:00:20.478: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Mar 21 12:00:20.482: INFO: Pod pod-with-poststart-exec-hook still exists Mar 21 12:00:22.478: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 21 12:00:22.807: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:00:22.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9brfl" for this suite. Mar 21 12:00:44.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:00:44.938: INFO: namespace: e2e-tests-container-lifecycle-hook-9brfl, resource: bindings, ignored listing per whitelist Mar 21 12:00:45.004: INFO: namespace e2e-tests-container-lifecycle-hook-9brfl deletion completed in 22.147768349s • [SLOW TEST:48.725 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:00:45.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-98702171-6b6b-11ea-946c-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 21 12:00:45.154: INFO: Waiting up to 5m0s for pod "pod-configmaps-9871828d-6b6b-11ea-946c-0242ac11000f" in namespace "e2e-tests-configmap-sjv2t" to be "success or failure" Mar 21 12:00:45.171: INFO: Pod "pod-configmaps-9871828d-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.317357ms Mar 21 12:00:47.214: INFO: Pod "pod-configmaps-9871828d-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060138063s Mar 21 12:00:49.219: INFO: Pod "pod-configmaps-9871828d-6b6b-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064427844s STEP: Saw pod success Mar 21 12:00:49.219: INFO: Pod "pod-configmaps-9871828d-6b6b-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:00:49.221: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-9871828d-6b6b-11ea-946c-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 21 12:00:49.244: INFO: Waiting for pod pod-configmaps-9871828d-6b6b-11ea-946c-0242ac11000f to disappear Mar 21 12:00:49.286: INFO: Pod pod-configmaps-9871828d-6b6b-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:00:49.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-sjv2t" for this suite. 
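
"With mappings" in the test above refers to the `items` field of a ConfigMap volume, which projects individual keys to caller-chosen file paths instead of exposing every key at the mount root. A sketch with illustrative key and path names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test         # container name taken from the log
    image: busybox                      # stand-in image
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # ConfigMap name pattern from the log
      items:
      - key: data-2                     # only this key is projected into the volume
        path: path/to/data-2            # remapped file path under the mountPath
```

Keys not listed under `items` are omitted from the mount entirely, which is what distinguishes this test from the plain volume-consumption variant.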
Mar 21 12:00:55.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:00:55.320: INFO: namespace: e2e-tests-configmap-sjv2t, resource: bindings, ignored listing per whitelist Mar 21 12:00:55.384: INFO: namespace e2e-tests-configmap-sjv2t deletion completed in 6.094822037s • [SLOW TEST:10.380 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:00:55.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-m2zkh STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 21 12:00:55.473: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 21 12:01:22.027: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.149:8080/dial?request=hostName&protocol=http&host=10.244.1.148&port=8080&tries=1'] 
Namespace:e2e-tests-pod-network-test-m2zkh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 12:01:22.027: INFO: >>> kubeConfig: /root/.kube/config I0321 12:01:22.066849 6 log.go:172] (0xc0009f1810) (0xc0011e1ea0) Create stream I0321 12:01:22.066878 6 log.go:172] (0xc0009f1810) (0xc0011e1ea0) Stream added, broadcasting: 1 I0321 12:01:22.069333 6 log.go:172] (0xc0009f1810) Reply frame received for 1 I0321 12:01:22.069376 6 log.go:172] (0xc0009f1810) (0xc0011e1f40) Create stream I0321 12:01:22.069392 6 log.go:172] (0xc0009f1810) (0xc0011e1f40) Stream added, broadcasting: 3 I0321 12:01:22.070415 6 log.go:172] (0xc0009f1810) Reply frame received for 3 I0321 12:01:22.070474 6 log.go:172] (0xc0009f1810) (0xc00142ee60) Create stream I0321 12:01:22.070502 6 log.go:172] (0xc0009f1810) (0xc00142ee60) Stream added, broadcasting: 5 I0321 12:01:22.071468 6 log.go:172] (0xc0009f1810) Reply frame received for 5 I0321 12:01:22.161692 6 log.go:172] (0xc0009f1810) Data frame received for 3 I0321 12:01:22.161744 6 log.go:172] (0xc0011e1f40) (3) Data frame handling I0321 12:01:22.161774 6 log.go:172] (0xc0011e1f40) (3) Data frame sent I0321 12:01:22.161797 6 log.go:172] (0xc0009f1810) Data frame received for 3 I0321 12:01:22.161815 6 log.go:172] (0xc0011e1f40) (3) Data frame handling I0321 12:01:22.161944 6 log.go:172] (0xc0009f1810) Data frame received for 5 I0321 12:01:22.161968 6 log.go:172] (0xc00142ee60) (5) Data frame handling I0321 12:01:22.163456 6 log.go:172] (0xc0009f1810) Data frame received for 1 I0321 12:01:22.163483 6 log.go:172] (0xc0011e1ea0) (1) Data frame handling I0321 12:01:22.163501 6 log.go:172] (0xc0011e1ea0) (1) Data frame sent I0321 12:01:22.163519 6 log.go:172] (0xc0009f1810) (0xc0011e1ea0) Stream removed, broadcasting: 1 I0321 12:01:22.163578 6 log.go:172] (0xc0009f1810) Go away received I0321 12:01:22.163627 6 log.go:172] (0xc0009f1810) (0xc0011e1ea0) Stream removed, 
broadcasting: 1 I0321 12:01:22.163660 6 log.go:172] (0xc0009f1810) (0xc0011e1f40) Stream removed, broadcasting: 3 I0321 12:01:22.163706 6 log.go:172] (0xc0009f1810) (0xc00142ee60) Stream removed, broadcasting: 5 Mar 21 12:01:22.163: INFO: Waiting for endpoints: map[] Mar 21 12:01:22.167: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.149:8080/dial?request=hostName&protocol=http&host=10.244.2.14&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-m2zkh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 12:01:22.167: INFO: >>> kubeConfig: /root/.kube/config I0321 12:01:22.197039 6 log.go:172] (0xc001d3e2c0) (0xc001be4460) Create stream I0321 12:01:22.197087 6 log.go:172] (0xc001d3e2c0) (0xc001be4460) Stream added, broadcasting: 1 I0321 12:01:22.199703 6 log.go:172] (0xc001d3e2c0) Reply frame received for 1 I0321 12:01:22.199738 6 log.go:172] (0xc001d3e2c0) (0xc001db2000) Create stream I0321 12:01:22.199753 6 log.go:172] (0xc001d3e2c0) (0xc001db2000) Stream added, broadcasting: 3 I0321 12:01:22.200613 6 log.go:172] (0xc001d3e2c0) Reply frame received for 3 I0321 12:01:22.200666 6 log.go:172] (0xc001d3e2c0) (0xc00142ef00) Create stream I0321 12:01:22.200691 6 log.go:172] (0xc001d3e2c0) (0xc00142ef00) Stream added, broadcasting: 5 I0321 12:01:22.201559 6 log.go:172] (0xc001d3e2c0) Reply frame received for 5 I0321 12:01:22.257624 6 log.go:172] (0xc001d3e2c0) Data frame received for 3 I0321 12:01:22.257660 6 log.go:172] (0xc001db2000) (3) Data frame handling I0321 12:01:22.257696 6 log.go:172] (0xc001db2000) (3) Data frame sent I0321 12:01:22.257943 6 log.go:172] (0xc001d3e2c0) Data frame received for 5 I0321 12:01:22.257961 6 log.go:172] (0xc00142ef00) (5) Data frame handling I0321 12:01:22.258076 6 log.go:172] (0xc001d3e2c0) Data frame received for 3 I0321 12:01:22.258095 6 log.go:172] (0xc001db2000) (3) Data frame handling I0321 12:01:22.259595 6 
log.go:172] (0xc001d3e2c0) Data frame received for 1 I0321 12:01:22.259612 6 log.go:172] (0xc001be4460) (1) Data frame handling I0321 12:01:22.259627 6 log.go:172] (0xc001be4460) (1) Data frame sent I0321 12:01:22.259645 6 log.go:172] (0xc001d3e2c0) (0xc001be4460) Stream removed, broadcasting: 1 I0321 12:01:22.259725 6 log.go:172] (0xc001d3e2c0) (0xc001be4460) Stream removed, broadcasting: 1 I0321 12:01:22.259743 6 log.go:172] (0xc001d3e2c0) (0xc001db2000) Stream removed, broadcasting: 3 I0321 12:01:22.259760 6 log.go:172] (0xc001d3e2c0) (0xc00142ef00) Stream removed, broadcasting: 5 Mar 21 12:01:22.259: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 I0321 12:01:22.259840 6 log.go:172] (0xc001d3e2c0) Go away received Mar 21 12:01:22.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-m2zkh" for this suite. 
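
In the intra-pod check above, a host test container curls the `/dial` endpoint of one test pod (`10.244.1.149:8080`), which then issues `tries` requests to the target pod named in the `host`/`port` query parameters and reports which hostnames answered. A sketch of one of the server-side test pods; the image name and tag are assumptions inferred from the `gcr.io/kubernetes-e2e-test-images` registry seen elsewhere in this log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                  # hypothetical name
  labels:
    selector: netserver              # hypothetical label for the test's pod selector
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.0   # assumed image tag
    ports:
    - containerPort: 8080            # the port the /dial probe targets
```

The empty `Waiting for endpoints: map[]` lines indicate every expected hostname was accounted for, so the check passed for both pod-to-pod paths.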
Mar 21 12:01:46.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:01:46.310: INFO: namespace: e2e-tests-pod-network-test-m2zkh, resource: bindings, ignored listing per whitelist Mar 21 12:01:46.368: INFO: namespace e2e-tests-pod-network-test-m2zkh deletion completed in 24.105263886s • [SLOW TEST:50.984 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:01:46.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Mar 21 12:01:46.480: INFO: Waiting up to 5m0s for pod "var-expansion-bcfc3409-6b6b-11ea-946c-0242ac11000f" in namespace "e2e-tests-var-expansion-jbzlw" to be "success or failure" Mar 21 12:01:46.487: INFO: Pod "var-expansion-bcfc3409-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.675805ms Mar 21 12:01:48.491: INFO: Pod "var-expansion-bcfc3409-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010658146s Mar 21 12:01:50.495: INFO: Pod "var-expansion-bcfc3409-6b6b-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01509205s STEP: Saw pod success Mar 21 12:01:50.495: INFO: Pod "var-expansion-bcfc3409-6b6b-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:01:50.499: INFO: Trying to get logs from node hunter-worker pod var-expansion-bcfc3409-6b6b-11ea-946c-0242ac11000f container dapi-container: STEP: delete the pod Mar 21 12:01:50.518: INFO: Waiting for pod var-expansion-bcfc3409-6b6b-11ea-946c-0242ac11000f to disappear Mar 21 12:01:50.523: INFO: Pod var-expansion-bcfc3409-6b6b-11ea-946c-0242ac11000f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:01:50.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-jbzlw" for this suite. 
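
Env composition, as exercised by the Variable Expansion test above, uses the `$(VAR)` syntax so one environment variable can reference variables defined earlier in the same list. A sketch (the variable names and values are illustrative; only the container name `dapi-container` appears in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container          # container name taken from the log
    image: busybox                # stand-in image
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: $(FOO);;$(BAR)       # expands to foo-value;;bar-value at container start
```

Ordering matters: `$(VAR)` only expands if `VAR` is defined earlier in the `env` list; otherwise the literal string is passed through unchanged.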
Mar 21 12:01:56.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:01:56.617: INFO: namespace: e2e-tests-var-expansion-jbzlw, resource: bindings, ignored listing per whitelist Mar 21 12:01:56.644: INFO: namespace e2e-tests-var-expansion-jbzlw deletion completed in 6.118142943s • [SLOW TEST:10.275 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:01:56.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 21 12:01:56.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-btl65' Mar 21 12:01:58.974: INFO: stderr: "" Mar 21 12:01:58.974: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 21 12:01:58.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-btl65' Mar 21 12:01:59.125: INFO: stderr: "" Mar 21 12:01:59.125: INFO: stdout: "update-demo-nautilus-6plww update-demo-nautilus-jhr5q " Mar 21 12:01:59.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6plww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:01:59.237: INFO: stderr: "" Mar 21 12:01:59.237: INFO: stdout: "" Mar 21 12:01:59.237: INFO: update-demo-nautilus-6plww is created but not running Mar 21 12:02:04.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:04.340: INFO: stderr: "" Mar 21 12:02:04.340: INFO: stdout: "update-demo-nautilus-6plww update-demo-nautilus-jhr5q " Mar 21 12:02:04.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6plww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:04.432: INFO: stderr: "" Mar 21 12:02:04.432: INFO: stdout: "true" Mar 21 12:02:04.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6plww -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:04.535: INFO: stderr: "" Mar 21 12:02:04.536: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 12:02:04.536: INFO: validating pod update-demo-nautilus-6plww Mar 21 12:02:04.541: INFO: got data: { "image": "nautilus.jpg" } Mar 21 12:02:04.541: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 12:02:04.541: INFO: update-demo-nautilus-6plww is verified up and running Mar 21 12:02:04.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhr5q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:04.648: INFO: stderr: "" Mar 21 12:02:04.648: INFO: stdout: "true" Mar 21 12:02:04.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhr5q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:04.743: INFO: stderr: "" Mar 21 12:02:04.743: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 12:02:04.743: INFO: validating pod update-demo-nautilus-jhr5q Mar 21 12:02:04.747: INFO: got data: { "image": "nautilus.jpg" } Mar 21 12:02:04.748: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 21 12:02:04.748: INFO: update-demo-nautilus-jhr5q is verified up and running STEP: scaling down the replication controller Mar 21 12:02:04.750: INFO: scanned /root for discovery docs: Mar 21 12:02:04.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:05.894: INFO: stderr: "" Mar 21 12:02:05.894: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 21 12:02:05.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:06.002: INFO: stderr: "" Mar 21 12:02:06.002: INFO: stdout: "update-demo-nautilus-6plww update-demo-nautilus-jhr5q " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 12:02:11.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:11.111: INFO: stderr: "" Mar 21 12:02:11.111: INFO: stdout: "update-demo-nautilus-6plww update-demo-nautilus-jhr5q " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 12:02:16.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:16.213: INFO: stderr: "" Mar 21 12:02:16.213: INFO: stdout: "update-demo-nautilus-jhr5q " Mar 21 12:02:16.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhr5q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:16.308: INFO: stderr: "" Mar 21 12:02:16.308: INFO: stdout: "true" Mar 21 12:02:16.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhr5q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:16.413: INFO: stderr: "" Mar 21 12:02:16.413: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 12:02:16.413: INFO: validating pod update-demo-nautilus-jhr5q Mar 21 12:02:16.417: INFO: got data: { "image": "nautilus.jpg" } Mar 21 12:02:16.417: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 12:02:16.417: INFO: update-demo-nautilus-jhr5q is verified up and running STEP: scaling up the replication controller Mar 21 12:02:16.419: INFO: scanned /root for discovery docs: Mar 21 12:02:16.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:17.542: INFO: stderr: "" Mar 21 12:02:17.542: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 21 12:02:17.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:17.651: INFO: stderr: "" Mar 21 12:02:17.651: INFO: stdout: "update-demo-nautilus-gbqxn update-demo-nautilus-jhr5q " Mar 21 12:02:17.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gbqxn -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:17.750: INFO: stderr: "" Mar 21 12:02:17.750: INFO: stdout: "" Mar 21 12:02:17.750: INFO: update-demo-nautilus-gbqxn is created but not running Mar 21 12:02:22.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:22.855: INFO: stderr: "" Mar 21 12:02:22.855: INFO: stdout: "update-demo-nautilus-gbqxn update-demo-nautilus-jhr5q " Mar 21 12:02:22.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gbqxn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:23.006: INFO: stderr: "" Mar 21 12:02:23.006: INFO: stdout: "true" Mar 21 12:02:23.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gbqxn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:23.103: INFO: stderr: "" Mar 21 12:02:23.103: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 12:02:23.103: INFO: validating pod update-demo-nautilus-gbqxn Mar 21 12:02:23.139: INFO: got data: { "image": "nautilus.jpg" } Mar 21 12:02:23.139: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 21 12:02:23.139: INFO: update-demo-nautilus-gbqxn is verified up and running Mar 21 12:02:23.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhr5q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:23.225: INFO: stderr: "" Mar 21 12:02:23.225: INFO: stdout: "true" Mar 21 12:02:23.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhr5q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:23.318: INFO: stderr: "" Mar 21 12:02:23.318: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 21 12:02:23.318: INFO: validating pod update-demo-nautilus-jhr5q Mar 21 12:02:23.322: INFO: got data: { "image": "nautilus.jpg" } Mar 21 12:02:23.322: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 12:02:23.322: INFO: update-demo-nautilus-jhr5q is verified up and running STEP: using delete to clean up resources Mar 21 12:02:23.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:23.437: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 21 12:02:23.437: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 21 12:02:23.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-btl65' Mar 21 12:02:23.544: INFO: stderr: "No resources found.\n" Mar 21 12:02:23.544: INFO: stdout: "" Mar 21 12:02:23.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-btl65 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 21 12:02:23.658: INFO: stderr: "" Mar 21 12:02:23.658: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:02:23.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-btl65" for this suite. 
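The scale test above repeatedly runs `kubectl get pods -o template` and compares the pod count against the expected replica count until they match (the "Replicas for name=update-demo: expected=1 actual=2" lines). A minimal sketch of that polling loop, with a stub standing in for the kubectl call (the function and stub names are ours, not from the test framework):

```python
import time

def wait_for_replicas(list_pods, expected, timeout=300, interval=5):
    """Poll list_pods() until it returns `expected` pod names or the timeout elapses.

    Mirrors the loop in the log: query, compare count, wait, retry.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        names = list_pods()
        if len(names) == expected:
            return names
        time.sleep(interval)
    raise TimeoutError("expected %d pods, last saw %d" % (expected, len(names)))

# Stub standing in for the `kubectl get pods -o template ... -l name=update-demo`
# query: two polls still see both pods, the third sees the scaled-down set.
calls = iter([["update-demo-nautilus-6plww", "update-demo-nautilus-jhr5q"],
              ["update-demo-nautilus-6plww", "update-demo-nautilus-jhr5q"],
              ["update-demo-nautilus-jhr5q"]])
print(wait_for_replicas(lambda: next(calls), expected=1, interval=0))
```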
Mar 21 12:02:45.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:02:45.778: INFO: namespace: e2e-tests-kubectl-btl65, resource: bindings, ignored listing per whitelist Mar 21 12:02:45.818: INFO: namespace e2e-tests-kubectl-btl65 deletion completed in 22.123597534s • [SLOW TEST:49.174 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:02:45.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Mar 21 12:02:45.943: INFO: Waiting up to 5m0s for pod "client-containers-e06a936d-6b6b-11ea-946c-0242ac11000f" in namespace "e2e-tests-containers-lj9d2" to be "success or failure" Mar 21 12:02:45.949: INFO: Pod "client-containers-e06a936d-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.347077ms Mar 21 12:02:47.954: INFO: Pod "client-containers-e06a936d-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010971634s Mar 21 12:02:49.957: INFO: Pod "client-containers-e06a936d-6b6b-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014160019s STEP: Saw pod success Mar 21 12:02:49.957: INFO: Pod "client-containers-e06a936d-6b6b-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:02:49.959: INFO: Trying to get logs from node hunter-worker pod client-containers-e06a936d-6b6b-11ea-946c-0242ac11000f container test-container: STEP: delete the pod Mar 21 12:02:50.025: INFO: Waiting for pod client-containers-e06a936d-6b6b-11ea-946c-0242ac11000f to disappear Mar 21 12:02:50.033: INFO: Pod client-containers-e06a936d-6b6b-11ea-946c-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:02:50.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-lj9d2" for this suite. 
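The Docker Containers test above verifies that setting `command` in a container spec replaces the image's Docker ENTRYPOINT. A minimal sketch of the kind of pod manifest it creates, expressed as a Python dict (the name, image, and command values are illustrative, not taken from the test):

```python
# Sketch of a pod spec that overrides the image's default command (ENTRYPOINT).
# In Kubernetes, `command` maps to ENTRYPOINT and `args` maps to CMD.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},  # illustrative name
    "spec": {
        "containers": [{
            "name": "test-container",
            "image": "busybox",                      # image is an assumption
            "command": ["/bin/echo", "override"],    # replaces ENTRYPOINT
        }],
        "restartPolicy": "Never",  # pod runs once, then the test checks its logs
    },
}
print(pod["spec"]["containers"][0]["command"])
```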
Mar 21 12:02:56.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:02:56.073: INFO: namespace: e2e-tests-containers-lj9d2, resource: bindings, ignored listing per whitelist Mar 21 12:02:56.129: INFO: namespace e2e-tests-containers-lj9d2 deletion completed in 6.092209303s • [SLOW TEST:10.310 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:02:56.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-e691bf0d-6b6b-11ea-946c-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 21 12:02:56.239: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e6936be3-6b6b-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-66kvv" to be "success or failure" Mar 21 12:02:56.300: INFO: Pod "pod-projected-configmaps-e6936be3-6b6b-11ea-946c-0242ac11000f": Phase="Pending", 
Reason="", readiness=false. Elapsed: 61.194315ms Mar 21 12:02:58.304: INFO: Pod "pod-projected-configmaps-e6936be3-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065333931s Mar 21 12:03:00.308: INFO: Pod "pod-projected-configmaps-e6936be3-6b6b-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069501954s STEP: Saw pod success Mar 21 12:03:00.308: INFO: Pod "pod-projected-configmaps-e6936be3-6b6b-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:03:00.311: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e6936be3-6b6b-11ea-946c-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 21 12:03:00.374: INFO: Waiting for pod pod-projected-configmaps-e6936be3-6b6b-11ea-946c-0242ac11000f to disappear Mar 21 12:03:00.401: INFO: Pod pod-projected-configmaps-e6936be3-6b6b-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:03:00.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-66kvv" for this suite. 
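The projected-configMap test above consumes a configMap "with mappings", meaning individual keys are remapped to explicit file paths inside the projected volume via `items`. A sketch of that volume stanza as a Python dict (the key and path values are assumptions; the configMap name echoes the log's prefix without its generated suffix):

```python
# Sketch of a projected volume that remaps a configMap key to a chosen path.
# Without `items`, every key becomes a file named after the key; with `items`,
# only the listed keys are projected, at the paths given.
volume = {
    "name": "projected-configmap-volume",
    "projected": {"sources": [{
        "configMap": {
            "name": "projected-configmap-test-volume-map",   # suffix omitted
            "items": [{"key": "data-1", "path": "path/to/data-2"}],  # assumed mapping
        },
    }]},
}
print(volume["projected"]["sources"][0]["configMap"]["items"][0]["path"])
```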
Mar 21 12:03:06.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:03:06.480: INFO: namespace: e2e-tests-projected-66kvv, resource: bindings, ignored listing per whitelist Mar 21 12:03:06.496: INFO: namespace e2e-tests-projected-66kvv deletion completed in 6.091782855s • [SLOW TEST:10.367 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:03:06.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 21 12:03:06.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ecbf190a-6b6b-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-xbtcw" to be "success or failure" Mar 21 12:03:06.605: INFO: Pod "downwardapi-volume-ecbf190a-6b6b-11ea-946c-0242ac11000f": Phase="Pending", 
Reason="", readiness=false. Elapsed: 10.168872ms Mar 21 12:03:08.617: INFO: Pod "downwardapi-volume-ecbf190a-6b6b-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02255837s Mar 21 12:03:10.621: INFO: Pod "downwardapi-volume-ecbf190a-6b6b-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026559159s STEP: Saw pod success Mar 21 12:03:10.621: INFO: Pod "downwardapi-volume-ecbf190a-6b6b-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:03:10.625: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ecbf190a-6b6b-11ea-946c-0242ac11000f container client-container: STEP: delete the pod Mar 21 12:03:10.655: INFO: Waiting for pod downwardapi-volume-ecbf190a-6b6b-11ea-946c-0242ac11000f to disappear Mar 21 12:03:10.713: INFO: Pod downwardapi-volume-ecbf190a-6b6b-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:03:10.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xbtcw" for this suite. 
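The Downward API test above checks that `defaultMode` is applied to the files a downwardAPI volume projects. A sketch of such a volume stanza (the mode value and item paths are assumptions for illustration; the test reads the resulting file permissions from inside the pod):

```python
# Sketch of a downwardAPI volume with an explicit defaultMode.
# defaultMode sets the permission bits on every projected file unless an
# item specifies its own mode.
volume = {
    "name": "podinfo",
    "downwardAPI": {
        "defaultMode": 0o400,  # assumed value: read-only for the owner
        "items": [{"path": "podname",
                   "fieldRef": {"fieldPath": "metadata.name"}}],
    },
}
print(oct(volume["downwardAPI"]["defaultMode"]))
```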
Mar 21 12:03:16.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:03:16.773: INFO: namespace: e2e-tests-downward-api-xbtcw, resource: bindings, ignored listing per whitelist Mar 21 12:03:16.818: INFO: namespace e2e-tests-downward-api-xbtcw deletion completed in 6.101052541s • [SLOW TEST:10.321 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:03:16.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-dj26s [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-dj26s STEP: Waiting until all 
stateful set ss replicas will be running in namespace e2e-tests-statefulset-dj26s Mar 21 12:03:17.063: INFO: Found 0 stateful pods, waiting for 1 Mar 21 12:03:27.067: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 21 12:03:27.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 21 12:03:27.364: INFO: stderr: "I0321 12:03:27.222310 2698 log.go:172] (0xc000138840) (0xc00076a640) Create stream\nI0321 12:03:27.222377 2698 log.go:172] (0xc000138840) (0xc00076a640) Stream added, broadcasting: 1\nI0321 12:03:27.229717 2698 log.go:172] (0xc000138840) Reply frame received for 1\nI0321 12:03:27.229747 2698 log.go:172] (0xc000138840) (0xc000694be0) Create stream\nI0321 12:03:27.229754 2698 log.go:172] (0xc000138840) (0xc000694be0) Stream added, broadcasting: 3\nI0321 12:03:27.230733 2698 log.go:172] (0xc000138840) Reply frame received for 3\nI0321 12:03:27.230776 2698 log.go:172] (0xc000138840) (0xc000628000) Create stream\nI0321 12:03:27.230787 2698 log.go:172] (0xc000138840) (0xc000628000) Stream added, broadcasting: 5\nI0321 12:03:27.231448 2698 log.go:172] (0xc000138840) Reply frame received for 5\nI0321 12:03:27.357667 2698 log.go:172] (0xc000138840) Data frame received for 3\nI0321 12:03:27.357719 2698 log.go:172] (0xc000694be0) (3) Data frame handling\nI0321 12:03:27.357776 2698 log.go:172] (0xc000694be0) (3) Data frame sent\nI0321 12:03:27.358055 2698 log.go:172] (0xc000138840) Data frame received for 5\nI0321 12:03:27.358088 2698 log.go:172] (0xc000628000) (5) Data frame handling\nI0321 12:03:27.358131 2698 log.go:172] (0xc000138840) Data frame received for 3\nI0321 12:03:27.358158 2698 log.go:172] (0xc000694be0) (3) Data frame handling\nI0321 12:03:27.359672 2698 log.go:172] (0xc000138840) Data 
frame received for 1\nI0321 12:03:27.359707 2698 log.go:172] (0xc00076a640) (1) Data frame handling\nI0321 12:03:27.359738 2698 log.go:172] (0xc00076a640) (1) Data frame sent\nI0321 12:03:27.359766 2698 log.go:172] (0xc000138840) (0xc00076a640) Stream removed, broadcasting: 1\nI0321 12:03:27.359788 2698 log.go:172] (0xc000138840) Go away received\nI0321 12:03:27.360037 2698 log.go:172] (0xc000138840) (0xc00076a640) Stream removed, broadcasting: 1\nI0321 12:03:27.360061 2698 log.go:172] (0xc000138840) (0xc000694be0) Stream removed, broadcasting: 3\nI0321 12:03:27.360078 2698 log.go:172] (0xc000138840) (0xc000628000) Stream removed, broadcasting: 5\n" Mar 21 12:03:27.364: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 21 12:03:27.364: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 21 12:03:27.367: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 21 12:03:37.372: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 21 12:03:37.372: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 12:03:37.408: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 12:03:37.408: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC }] Mar 21 12:03:37.408: INFO: Mar 21 12:03:37.408: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 21 12:03:38.413: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.974651035s Mar 21 12:03:39.419: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 7.967797142s Mar 21 12:03:40.423: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.963446481s Mar 21 12:03:41.428: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959926737s Mar 21 12:03:42.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.954572194s Mar 21 12:03:43.438: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.949506528s Mar 21 12:03:44.444: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.944468318s Mar 21 12:03:45.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.939100618s Mar 21 12:03:46.455: INFO: Verifying statefulset ss doesn't scale past 3 for another 933.019051ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-dj26s Mar 21 12:03:47.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:03:47.667: INFO: stderr: "I0321 12:03:47.589524 2720 log.go:172] (0xc0008442c0) (0xc000734640) Create stream\nI0321 12:03:47.589579 2720 log.go:172] (0xc0008442c0) (0xc000734640) Stream added, broadcasting: 1\nI0321 12:03:47.591759 2720 log.go:172] (0xc0008442c0) Reply frame received for 1\nI0321 12:03:47.591819 2720 log.go:172] (0xc0008442c0) (0xc0005b8f00) Create stream\nI0321 12:03:47.591842 2720 log.go:172] (0xc0008442c0) (0xc0005b8f00) Stream added, broadcasting: 3\nI0321 12:03:47.592890 2720 log.go:172] (0xc0008442c0) Reply frame received for 3\nI0321 12:03:47.592931 2720 log.go:172] (0xc0008442c0) (0xc000556000) Create stream\nI0321 12:03:47.592944 2720 log.go:172] (0xc0008442c0) (0xc000556000) Stream added, broadcasting: 5\nI0321 12:03:47.594161 2720 log.go:172] (0xc0008442c0) Reply frame received for 5\nI0321 12:03:47.661555 2720 log.go:172] (0xc0008442c0) Data frame received 
for 5\nI0321 12:03:47.661605 2720 log.go:172] (0xc000556000) (5) Data frame handling\nI0321 12:03:47.661633 2720 log.go:172] (0xc0008442c0) Data frame received for 3\nI0321 12:03:47.661644 2720 log.go:172] (0xc0005b8f00) (3) Data frame handling\nI0321 12:03:47.661658 2720 log.go:172] (0xc0005b8f00) (3) Data frame sent\nI0321 12:03:47.661685 2720 log.go:172] (0xc0008442c0) Data frame received for 3\nI0321 12:03:47.661697 2720 log.go:172] (0xc0005b8f00) (3) Data frame handling\nI0321 12:03:47.662995 2720 log.go:172] (0xc0008442c0) Data frame received for 1\nI0321 12:03:47.663028 2720 log.go:172] (0xc000734640) (1) Data frame handling\nI0321 12:03:47.663057 2720 log.go:172] (0xc000734640) (1) Data frame sent\nI0321 12:03:47.663088 2720 log.go:172] (0xc0008442c0) (0xc000734640) Stream removed, broadcasting: 1\nI0321 12:03:47.663123 2720 log.go:172] (0xc0008442c0) Go away received\nI0321 12:03:47.663260 2720 log.go:172] (0xc0008442c0) (0xc000734640) Stream removed, broadcasting: 1\nI0321 12:03:47.663293 2720 log.go:172] (0xc0008442c0) (0xc0005b8f00) Stream removed, broadcasting: 3\nI0321 12:03:47.663310 2720 log.go:172] (0xc0008442c0) (0xc000556000) Stream removed, broadcasting: 5\n" Mar 21 12:03:47.667: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 21 12:03:47.667: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 21 12:03:47.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:03:47.851: INFO: stderr: "I0321 12:03:47.787799 2742 log.go:172] (0xc0008462c0) (0xc0005e9360) Create stream\nI0321 12:03:47.787850 2742 log.go:172] (0xc0008462c0) (0xc0005e9360) Stream added, broadcasting: 1\nI0321 12:03:47.789998 2742 log.go:172] (0xc0008462c0) Reply frame received for 1\nI0321 12:03:47.790050 2742 log.go:172] 
(0xc0008462c0) (0xc0005e9400) Create stream\nI0321 12:03:47.790066 2742 log.go:172] (0xc0008462c0) (0xc0005e9400) Stream added, broadcasting: 3\nI0321 12:03:47.790839 2742 log.go:172] (0xc0008462c0) Reply frame received for 3\nI0321 12:03:47.790872 2742 log.go:172] (0xc0008462c0) (0xc0006dc000) Create stream\nI0321 12:03:47.790885 2742 log.go:172] (0xc0008462c0) (0xc0006dc000) Stream added, broadcasting: 5\nI0321 12:03:47.791718 2742 log.go:172] (0xc0008462c0) Reply frame received for 5\nI0321 12:03:47.846046 2742 log.go:172] (0xc0008462c0) Data frame received for 3\nI0321 12:03:47.846080 2742 log.go:172] (0xc0005e9400) (3) Data frame handling\nI0321 12:03:47.846100 2742 log.go:172] (0xc0005e9400) (3) Data frame sent\nI0321 12:03:47.846106 2742 log.go:172] (0xc0008462c0) Data frame received for 3\nI0321 12:03:47.846111 2742 log.go:172] (0xc0005e9400) (3) Data frame handling\nI0321 12:03:47.846121 2742 log.go:172] (0xc0008462c0) Data frame received for 5\nI0321 12:03:47.846128 2742 log.go:172] (0xc0006dc000) (5) Data frame handling\nI0321 12:03:47.846143 2742 log.go:172] (0xc0006dc000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0321 12:03:47.846189 2742 log.go:172] (0xc0008462c0) Data frame received for 5\nI0321 12:03:47.846202 2742 log.go:172] (0xc0006dc000) (5) Data frame handling\nI0321 12:03:47.847834 2742 log.go:172] (0xc0008462c0) Data frame received for 1\nI0321 12:03:47.847853 2742 log.go:172] (0xc0005e9360) (1) Data frame handling\nI0321 12:03:47.847864 2742 log.go:172] (0xc0005e9360) (1) Data frame sent\nI0321 12:03:47.847875 2742 log.go:172] (0xc0008462c0) (0xc0005e9360) Stream removed, broadcasting: 1\nI0321 12:03:47.847886 2742 log.go:172] (0xc0008462c0) Go away received\nI0321 12:03:47.848160 2742 log.go:172] (0xc0008462c0) (0xc0005e9360) Stream removed, broadcasting: 1\nI0321 12:03:47.848186 2742 log.go:172] (0xc0008462c0) (0xc0005e9400) Stream removed, broadcasting: 3\nI0321 12:03:47.848201 2742 log.go:172] 
(0xc0008462c0) (0xc0006dc000) Stream removed, broadcasting: 5\n" Mar 21 12:03:47.851: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 21 12:03:47.851: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 21 12:03:47.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:03:48.039: INFO: stderr: "I0321 12:03:47.980905 2765 log.go:172] (0xc0007f42c0) (0xc000706640) Create stream\nI0321 12:03:47.980971 2765 log.go:172] (0xc0007f42c0) (0xc000706640) Stream added, broadcasting: 1\nI0321 12:03:47.983328 2765 log.go:172] (0xc0007f42c0) Reply frame received for 1\nI0321 12:03:47.983373 2765 log.go:172] (0xc0007f42c0) (0xc0001b8c80) Create stream\nI0321 12:03:47.983381 2765 log.go:172] (0xc0007f42c0) (0xc0001b8c80) Stream added, broadcasting: 3\nI0321 12:03:47.984261 2765 log.go:172] (0xc0007f42c0) Reply frame received for 3\nI0321 12:03:47.984303 2765 log.go:172] (0xc0007f42c0) (0xc000370000) Create stream\nI0321 12:03:47.984315 2765 log.go:172] (0xc0007f42c0) (0xc000370000) Stream added, broadcasting: 5\nI0321 12:03:47.985355 2765 log.go:172] (0xc0007f42c0) Reply frame received for 5\nI0321 12:03:48.033786 2765 log.go:172] (0xc0007f42c0) Data frame received for 5\nI0321 12:03:48.033839 2765 log.go:172] (0xc000370000) (5) Data frame handling\nI0321 12:03:48.033853 2765 log.go:172] (0xc000370000) (5) Data frame sent\nI0321 12:03:48.033865 2765 log.go:172] (0xc0007f42c0) Data frame received for 5\nI0321 12:03:48.033876 2765 log.go:172] (0xc000370000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0321 12:03:48.033921 2765 log.go:172] (0xc0007f42c0) Data frame received for 3\nI0321 12:03:48.033951 2765 log.go:172] (0xc0001b8c80) (3) Data frame handling\nI0321 12:03:48.033969 
2765 log.go:172] (0xc0001b8c80) (3) Data frame sent\nI0321 12:03:48.033979 2765 log.go:172] (0xc0007f42c0) Data frame received for 3\nI0321 12:03:48.033988 2765 log.go:172] (0xc0001b8c80) (3) Data frame handling\nI0321 12:03:48.035224 2765 log.go:172] (0xc0007f42c0) Data frame received for 1\nI0321 12:03:48.035242 2765 log.go:172] (0xc000706640) (1) Data frame handling\nI0321 12:03:48.035256 2765 log.go:172] (0xc000706640) (1) Data frame sent\nI0321 12:03:48.035271 2765 log.go:172] (0xc0007f42c0) (0xc000706640) Stream removed, broadcasting: 1\nI0321 12:03:48.035295 2765 log.go:172] (0xc0007f42c0) Go away received\nI0321 12:03:48.035494 2765 log.go:172] (0xc0007f42c0) (0xc000706640) Stream removed, broadcasting: 1\nI0321 12:03:48.035517 2765 log.go:172] (0xc0007f42c0) (0xc0001b8c80) Stream removed, broadcasting: 3\nI0321 12:03:48.035530 2765 log.go:172] (0xc0007f42c0) (0xc000370000) Stream removed, broadcasting: 5\n" Mar 21 12:03:48.039: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 21 12:03:48.039: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 21 12:03:48.047: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 12:03:48.047: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 12:03:48.047: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 21 12:03:48.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 21 12:03:48.269: INFO: stderr: "I0321 12:03:48.190966 2788 log.go:172] (0xc00014c840) (0xc00061b4a0) Create stream\nI0321 12:03:48.191021 2788 log.go:172] (0xc00014c840) (0xc00061b4a0) Stream added, broadcasting: 
1\nI0321 12:03:48.193354 2788 log.go:172] (0xc00014c840) Reply frame received for 1\nI0321 12:03:48.193417 2788 log.go:172] (0xc00014c840) (0xc00061b540) Create stream\nI0321 12:03:48.193434 2788 log.go:172] (0xc00014c840) (0xc00061b540) Stream added, broadcasting: 3\nI0321 12:03:48.194153 2788 log.go:172] (0xc00014c840) Reply frame received for 3\nI0321 12:03:48.194180 2788 log.go:172] (0xc00014c840) (0xc00061b5e0) Create stream\nI0321 12:03:48.194188 2788 log.go:172] (0xc00014c840) (0xc00061b5e0) Stream added, broadcasting: 5\nI0321 12:03:48.194861 2788 log.go:172] (0xc00014c840) Reply frame received for 5\nI0321 12:03:48.263237 2788 log.go:172] (0xc00014c840) Data frame received for 5\nI0321 12:03:48.263277 2788 log.go:172] (0xc00061b5e0) (5) Data frame handling\nI0321 12:03:48.263314 2788 log.go:172] (0xc00014c840) Data frame received for 3\nI0321 12:03:48.263339 2788 log.go:172] (0xc00061b540) (3) Data frame handling\nI0321 12:03:48.263353 2788 log.go:172] (0xc00061b540) (3) Data frame sent\nI0321 12:03:48.263378 2788 log.go:172] (0xc00014c840) Data frame received for 3\nI0321 12:03:48.263399 2788 log.go:172] (0xc00061b540) (3) Data frame handling\nI0321 12:03:48.264888 2788 log.go:172] (0xc00014c840) Data frame received for 1\nI0321 12:03:48.264923 2788 log.go:172] (0xc00061b4a0) (1) Data frame handling\nI0321 12:03:48.264947 2788 log.go:172] (0xc00061b4a0) (1) Data frame sent\nI0321 12:03:48.264969 2788 log.go:172] (0xc00014c840) (0xc00061b4a0) Stream removed, broadcasting: 1\nI0321 12:03:48.265001 2788 log.go:172] (0xc00014c840) Go away received\nI0321 12:03:48.265497 2788 log.go:172] (0xc00014c840) (0xc00061b4a0) Stream removed, broadcasting: 1\nI0321 12:03:48.265523 2788 log.go:172] (0xc00014c840) (0xc00061b540) Stream removed, broadcasting: 3\nI0321 12:03:48.265541 2788 log.go:172] (0xc00014c840) (0xc00061b5e0) Stream removed, broadcasting: 5\n" Mar 21 12:03:48.269: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 21 
12:03:48.269: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 21 12:03:48.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 21 12:03:48.508: INFO: stderr: "I0321 12:03:48.403074 2811 log.go:172] (0xc000162840) (0xc0007a8640) Create stream\nI0321 12:03:48.403137 2811 log.go:172] (0xc000162840) (0xc0007a8640) Stream added, broadcasting: 1\nI0321 12:03:48.409531 2811 log.go:172] (0xc000162840) Reply frame received for 1\nI0321 12:03:48.409589 2811 log.go:172] (0xc000162840) (0xc000664c80) Create stream\nI0321 12:03:48.409610 2811 log.go:172] (0xc000162840) (0xc000664c80) Stream added, broadcasting: 3\nI0321 12:03:48.410403 2811 log.go:172] (0xc000162840) Reply frame received for 3\nI0321 12:03:48.410440 2811 log.go:172] (0xc000162840) (0xc0007b6000) Create stream\nI0321 12:03:48.410456 2811 log.go:172] (0xc000162840) (0xc0007b6000) Stream added, broadcasting: 5\nI0321 12:03:48.411103 2811 log.go:172] (0xc000162840) Reply frame received for 5\nI0321 12:03:48.501981 2811 log.go:172] (0xc000162840) Data frame received for 3\nI0321 12:03:48.502013 2811 log.go:172] (0xc000664c80) (3) Data frame handling\nI0321 12:03:48.502037 2811 log.go:172] (0xc000664c80) (3) Data frame sent\nI0321 12:03:48.502063 2811 log.go:172] (0xc000162840) Data frame received for 3\nI0321 12:03:48.502085 2811 log.go:172] (0xc000664c80) (3) Data frame handling\nI0321 12:03:48.502161 2811 log.go:172] (0xc000162840) Data frame received for 5\nI0321 12:03:48.502200 2811 log.go:172] (0xc0007b6000) (5) Data frame handling\nI0321 12:03:48.504153 2811 log.go:172] (0xc000162840) Data frame received for 1\nI0321 12:03:48.504186 2811 log.go:172] (0xc0007a8640) (1) Data frame handling\nI0321 12:03:48.504195 2811 log.go:172] (0xc0007a8640) (1) Data frame sent\nI0321 
12:03:48.504210 2811 log.go:172] (0xc000162840) (0xc0007a8640) Stream removed, broadcasting: 1\nI0321 12:03:48.504284 2811 log.go:172] (0xc000162840) Go away received\nI0321 12:03:48.504388 2811 log.go:172] (0xc000162840) (0xc0007a8640) Stream removed, broadcasting: 1\nI0321 12:03:48.504402 2811 log.go:172] (0xc000162840) (0xc000664c80) Stream removed, broadcasting: 3\nI0321 12:03:48.504415 2811 log.go:172] (0xc000162840) (0xc0007b6000) Stream removed, broadcasting: 5\n" Mar 21 12:03:48.508: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 21 12:03:48.508: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 21 12:03:48.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 21 12:03:48.730: INFO: stderr: "I0321 12:03:48.627944 2834 log.go:172] (0xc0008642c0) (0xc000720640) Create stream\nI0321 12:03:48.628003 2834 log.go:172] (0xc0008642c0) (0xc000720640) Stream added, broadcasting: 1\nI0321 12:03:48.631035 2834 log.go:172] (0xc0008642c0) Reply frame received for 1\nI0321 12:03:48.631110 2834 log.go:172] (0xc0008642c0) (0xc0007a8dc0) Create stream\nI0321 12:03:48.631149 2834 log.go:172] (0xc0008642c0) (0xc0007a8dc0) Stream added, broadcasting: 3\nI0321 12:03:48.632159 2834 log.go:172] (0xc0008642c0) Reply frame received for 3\nI0321 12:03:48.632198 2834 log.go:172] (0xc0008642c0) (0xc0007206e0) Create stream\nI0321 12:03:48.632213 2834 log.go:172] (0xc0008642c0) (0xc0007206e0) Stream added, broadcasting: 5\nI0321 12:03:48.633354 2834 log.go:172] (0xc0008642c0) Reply frame received for 5\nI0321 12:03:48.724356 2834 log.go:172] (0xc0008642c0) Data frame received for 3\nI0321 12:03:48.724403 2834 log.go:172] (0xc0007a8dc0) (3) Data frame handling\nI0321 12:03:48.724435 2834 log.go:172] (0xc0007a8dc0) (3) Data frame 
sent\nI0321 12:03:48.724452 2834 log.go:172] (0xc0008642c0) Data frame received for 3\nI0321 12:03:48.724466 2834 log.go:172] (0xc0007a8dc0) (3) Data frame handling\nI0321 12:03:48.724525 2834 log.go:172] (0xc0008642c0) Data frame received for 5\nI0321 12:03:48.724558 2834 log.go:172] (0xc0007206e0) (5) Data frame handling\nI0321 12:03:48.726744 2834 log.go:172] (0xc0008642c0) Data frame received for 1\nI0321 12:03:48.726761 2834 log.go:172] (0xc000720640) (1) Data frame handling\nI0321 12:03:48.726768 2834 log.go:172] (0xc000720640) (1) Data frame sent\nI0321 12:03:48.726784 2834 log.go:172] (0xc0008642c0) (0xc000720640) Stream removed, broadcasting: 1\nI0321 12:03:48.726797 2834 log.go:172] (0xc0008642c0) Go away received\nI0321 12:03:48.727102 2834 log.go:172] (0xc0008642c0) (0xc000720640) Stream removed, broadcasting: 1\nI0321 12:03:48.727129 2834 log.go:172] (0xc0008642c0) (0xc0007a8dc0) Stream removed, broadcasting: 3\nI0321 12:03:48.727142 2834 log.go:172] (0xc0008642c0) (0xc0007206e0) Stream removed, broadcasting: 5\n" Mar 21 12:03:48.730: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 21 12:03:48.730: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 21 12:03:48.730: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 12:03:48.750: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 21 12:03:58.757: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 21 12:03:58.757: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 21 12:03:58.758: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 21 12:03:58.769: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 12:03:58.769: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC 
} {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC }] Mar 21 12:03:58.769: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:03:58.769: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:03:58.769: INFO: Mar 21 12:03:58.769: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 21 12:03:59.864: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 12:03:59.864: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC 
}] Mar 21 12:03:59.864: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:03:59.864: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:03:59.864: INFO: Mar 21 12:03:59.864: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 21 12:04:00.882: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 12:04:00.883: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC }] Mar 21 12:04:00.883: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 
+0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:04:00.883: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:04:00.883: INFO: Mar 21 12:04:00.883: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 21 12:04:01.887: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 12:04:01.887: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC }] Mar 21 12:04:01.887: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:04:01.887: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 
12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:04:01.887: INFO: Mar 21 12:04:01.887: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 21 12:04:02.892: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 12:04:02.892: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC }] Mar 21 12:04:02.892: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:04:02.892: INFO: Mar 21 12:04:02.892: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 21 12:04:03.896: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 12:04:03.896: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC }] Mar 21 12:04:03.896: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:04:03.896: INFO: Mar 21 12:04:03.896: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 21 12:04:04.901: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 12:04:04.902: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC }] Mar 21 12:04:04.902: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:04:04.902: INFO: Mar 21 12:04:04.902: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 21 12:04:05.906: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 12:04:05.906: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC }] Mar 21 12:04:05.907: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:04:05.907: INFO: Mar 21 12:04:05.907: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 21 12:04:06.911: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 12:04:06.912: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC }] Mar 21 12:04:06.912: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:04:06.912: INFO: Mar 21 12:04:06.912: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 21 12:04:07.916: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 12:04:07.916: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:17 +0000 UTC }] Mar 21 12:04:07.916: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:03:37 +0000 UTC }] Mar 21 12:04:07.916: INFO: Mar 21 12:04:07.916: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-dj26s Mar 21 12:04:08.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:04:09.056: INFO: rc: 1 Mar 21 12:04:09.056: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade 
connection: container not found ("nginx") [] 0xc000d0c3c0 exit status 1 true [0xc001e8ebb8 0xc001e8ebd0 0xc001e8ebe8] [0xc001e8ebb8 0xc001e8ebd0 0xc001e8ebe8] [0xc001e8ebc8 0xc001e8ebe0] [0x935700 0x935700] 0xc0023fcde0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Mar 21 12:04:19.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:04:19.145: INFO: rc: 1 Mar 21 12:04:19.145: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b8120 exit status 1 true [0xc00000e2a8 0xc00044a150 0xc00044a250] [0xc00000e2a8 0xc00044a150 0xc00044a250] [0xc00044a090 0xc00044a218] [0x935700 0x935700] 0xc001e48de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 21 12:04:29.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:04:29.236: INFO: rc: 1 Mar 21 12:04:29.236: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b8240 exit status 1 true [0xc00044a278 0xc00044a2c8 0xc00044a318] [0xc00044a278 0xc00044a2c8 0xc00044a318] [0xc00044a2a0 0xc00044a308] [0x935700 0x935700] 0xc001e49260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found 
error: exit status 1 Mar 21 12:04:39.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:04:39.336: INFO: rc: 1 Mar 21 12:04:39.336: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b8390 exit status 1 true [0xc00044a398 0xc00044a488 0xc00044a4e0] [0xc00044a398 0xc00044a488 0xc00044a4e0] [0xc00044a458 0xc00044a4c0] [0x935700 0x935700] 0xc001e49500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 21 12:04:49.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:04:49.419: INFO: rc: 1 Mar 21 12:04:49.419: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021e2120 exit status 1 true [0xc0024b2000 0xc0024b2018 0xc0024b2030] [0xc0024b2000 0xc0024b2018 0xc0024b2030] [0xc0024b2010 0xc0024b2028] [0x935700 0x935700] 0xc0026c2540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 21 12:04:59.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:04:59.508: INFO: rc: 1 Mar 21 12:04:59.508: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00234c150 exit status 1 true [0xc002348000 0xc002348018 0xc002348030] [0xc002348000 0xc002348018 0xc002348030] [0xc002348010 0xc002348028] [0x935700 0x935700] 0xc0024941e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 21 12:05:09.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:05:09.597: INFO: rc: 1 Mar 21 12:05:09.597: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00234c270 exit status 1 true [0xc002348038 0xc002348050 0xc002348068] [0xc002348038 0xc002348050 0xc002348068] [0xc002348048 0xc002348060] [0x935700 0x935700] 0xc002494480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 21 12:05:19.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:05:19.686: INFO: rc: 1 Mar 21 12:05:19.686: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e9c180 exit status 1 true [0xc001dce000 
0xc001dce018 0xc001dce030] [0xc001dce000 0xc001dce018 0xc001dce030] [0xc001dce010 0xc001dce028] [0x935700 0x935700] 0xc0021b8a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 21 12:05:29.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:05:29.772: INFO: rc: 1 Mar 21 12:05:29.772: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b8510 exit status 1 true [0xc00044a500 0xc00044a530 0xc00044a578] [0xc00044a500 0xc00044a530 0xc00044a578] [0xc00044a528 0xc00044a558] [0x935700 0x935700] 0xc001e497a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 21 12:05:39.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:05:39.858: INFO: rc: 1 Mar 21 12:05:39.858: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021e22d0 exit status 1 true [0xc0024b2038 0xc0024b2050 0xc0024b2068] [0xc0024b2038 0xc0024b2050 0xc0024b2068] [0xc0024b2048 0xc0024b2060] [0x935700 0x935700] 0xc0026c27e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 21 12:05:49.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:05:49.962: INFO: rc: 1 Mar 21 12:05:49.962: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00234c390 exit status 1 true [0xc002348070 0xc002348088 0xc0023480a0] [0xc002348070 0xc002348088 0xc0023480a0] [0xc002348080 0xc002348098] [0x935700 0x935700] 0xc002494720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 21 12:05:59.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:06:00.050: INFO: rc: 1 Mar 21 12:06:00.050: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00234c4e0 exit status 1 true [0xc0023480a8 0xc0023480c0 0xc0023480d8] [0xc0023480a8 0xc0023480c0 0xc0023480d8] [0xc0023480b8 0xc0023480d0] [0x935700 0x935700] 0xc0024949c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 21 12:06:10.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 21 12:06:10.155: INFO: rc: 1 Mar 21 12:06:10.155: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00234cf30 exit status 1 true [0xc0023480e0 0xc0023480f8 0xc002348110] [0xc0023480e0 0xc0023480f8 0xc002348110] [0xc0023480f0 0xc002348108] [0x935700 0x935700] 0xc002494c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Mar 21 12:06:20.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 21 12:06:20.234: INFO: rc: 1
Mar 21 12:06:20.234: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011b8150 exit status 1 true [0xc00000e278 0xc00044a150 0xc00044a250] [0xc00000e278 0xc00044a150 0xc00044a250] [0xc00044a090 0xc00044a218] [0x935700 0x935700] 0xc001e48de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
[... 16 identical Running / rc: 1 / Waiting 10s retry cycles (Mar 21 12:06:30 through 12:09:01) elided; every attempt failed with the same Error from server (NotFound): pods "ss-0" not found ...]
Mar 21 12:09:11.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dj26s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 21 12:09:11.804: INFO: rc: 1
Mar 21 12:09:11.804: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Mar 21 12:09:11.804: INFO: Scaling statefulset ss to 0
Mar 21 12:09:11.815: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Mar 21 12:09:11.817: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dj26s
Mar 21 12:09:11.820: INFO: Scaling statefulset ss to 0
Mar 21 12:09:11.827: INFO: Waiting for statefulset status.replicas updated to 0
Mar 21 12:09:11.829: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:09:11.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-dj26s" for this suite.
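The RunHostCmd loop above is a plain retry-until-timeout pattern: run the command, and on a non-zero exit code wait a fixed interval and try again until an overall deadline passes. A minimal shell sketch of that pattern (the `retry_host_cmd` helper and its arguments are illustrative, not part of the Kubernetes e2e framework):

```shell
#!/bin/sh
# Retry a command at a fixed interval until it succeeds or a timeout expires,
# mirroring the shape of the e2e framework's RunHostCmd retry loop.
# retry_host_cmd TIMEOUT_SECONDS INTERVAL_SECONDS CMD [ARGS...]
retry_host_cmd() {
    timeout=$1
    interval=$2
    shift 2
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        "$@" && return 0                           # command succeeded: stop retrying
        echo "rc: $?; waiting ${interval}s to retry" >&2
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    return 1                                       # deadline passed: give up
}
```

In the log the wrapped command is `kubectl exec ... -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'`; because pod `ss-0` was already gone, every attempt exited 1 until the framework gave up and proceeded to scale the StatefulSet down.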
Mar 21 12:09:17.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:09:17.922: INFO: namespace: e2e-tests-statefulset-dj26s, resource: bindings, ignored listing per whitelist Mar 21 12:09:17.973: INFO: namespace e2e-tests-statefulset-dj26s deletion completed in 6.102660253s • [SLOW TEST:361.155 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:09:17.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 12:09:18.121: INFO: Creating deployment "test-recreate-deployment" Mar 21 12:09:18.123: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 21 12:09:18.130: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Mar 21 
12:09:20.137: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 21 12:09:20.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720389358, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720389358, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720389358, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720389358, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 12:09:22.144: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 21 12:09:22.151: INFO: Updating deployment test-recreate-deployment Mar 21 12:09:22.151: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 21 12:09:22.620: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-x6zrw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x6zrw/deployments/test-recreate-deployment,UID:ca334b32-6b6c-11ea-99e8-0242ac110002,ResourceVersion:1021994,Generation:2,CreationTimestamp:2020-03-21 12:09:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-21 12:09:22 +0000 UTC 2020-03-21 12:09:22 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-21 12:09:22 +0000 UTC 2020-03-21 12:09:18 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 21 12:09:22.669: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-x6zrw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x6zrw/replicasets/test-recreate-deployment-589c4bfd,UID:ccae0532-6b6c-11ea-99e8-0242ac110002,ResourceVersion:1021992,Generation:1,CreationTimestamp:2020-03-21 12:09:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ca334b32-6b6c-11ea-99e8-0242ac110002 0xc00107499f 0xc001074aa0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 21 12:09:22.669: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 21 12:09:22.669: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-x6zrw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x6zrw/replicasets/test-recreate-deployment-5bf7f65dc,UID:ca34a80a-6b6c-11ea-99e8-0242ac110002,ResourceVersion:1021982,Generation:2,CreationTimestamp:2020-03-21 12:09:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ca334b32-6b6c-11ea-99e8-0242ac110002 0xc001074b80 0xc001074b81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 21 12:09:22.673: INFO: Pod "test-recreate-deployment-589c4bfd-2khlm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-2khlm,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-x6zrw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x6zrw/pods/test-recreate-deployment-589c4bfd-2khlm,UID:ccb104b6-6b6c-11ea-99e8-0242ac110002,ResourceVersion:1021993,Generation:0,CreationTimestamp:2020-03-21 12:09:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd ccae0532-6b6c-11ea-99e8-0242ac110002 0xc001ed45ff 0xc001ed4610}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-75gzf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-75gzf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-75gzf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ed4680} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ed46a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:09:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:09:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:09:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:09:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-21 12:09:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:09:22.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-x6zrw" for this suite. 
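The Deployment dump above shows `Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,}`: with the Recreate strategy the old ReplicaSet is scaled to 0 before any new pod is created, which is why the old `test-recreate-deployment-5bf7f65dc` ReplicaSet already reports Replicas:0 while the new pod is still Pending. A sketch of an equivalent manifest (names, labels, and images are taken from the log; all other field values are assumptions):

```shell
#!/bin/sh
# Write a Deployment manifest shaped like the one the test creates.
# Field values not visible in the log (e.g. replica count defaults) are assumed.
cat > test-recreate-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  strategy:
    type: Recreate        # delete all old pods before creating new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
```

The test's rollout swaps the revision-1 redis image for nginx; because the strategy is Recreate rather than the default RollingUpdate, updating `spec.template` triggers the delete-all-then-create-all transition observed in the ReplicaSet dumps.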
Mar 21 12:09:28.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:09:28.752: INFO: namespace: e2e-tests-deployment-x6zrw, resource: bindings, ignored listing per whitelist Mar 21 12:09:28.822: INFO: namespace e2e-tests-deployment-x6zrw deletion completed in 6.145058186s • [SLOW TEST:10.848 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:09:28.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 21 12:09:28.930: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 12:09:28.943: INFO: Waiting for terminating namespaces to be deleted... 
Mar 21 12:09:28.944: INFO: Logging pods the kubelet thinks are on node hunter-worker before test Mar 21 12:09:28.950: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container status recorded) Mar 21 12:09:28.950: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 12:09:28.950: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container status recorded) Mar 21 12:09:28.950: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 12:09:28.950: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container status recorded) Mar 21 12:09:28.950: INFO: Container coredns ready: true, restart count 0 Mar 21 12:09:28.950: INFO: Logging pods the kubelet thinks are on node hunter-worker2 before test Mar 21 12:09:28.957: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container status recorded) Mar 21 12:09:28.957: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 12:09:28.957: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container status recorded) Mar 21 12:09:28.957: INFO: Container coredns ready: true, restart count 0 Mar 21 12:09:28.957: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container status recorded) Mar 21 12:09:28.957: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Mar 21 12:09:29.052: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker Mar 21 12:09:29.053: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 Mar 21 12:09:29.053: INFO: 
Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker Mar 21 12:09:29.053: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Mar 21 12:09:29.053: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Mar 21 12:09:29.053: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-d0b74049-6b6c-11ea-946c-0242ac11000f.15fe5071ea651e96], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-fgbw6/filler-pod-d0b74049-6b6c-11ea-946c-0242ac11000f to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-d0b74049-6b6c-11ea-946c-0242ac11000f.15fe507231dd1203], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d0b74049-6b6c-11ea-946c-0242ac11000f.15fe50726431fc90], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-d0b74049-6b6c-11ea-946c-0242ac11000f.15fe507281b4a97a], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-d0b95d92-6b6c-11ea-946c-0242ac11000f.15fe5071ebd45b5c], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-fgbw6/filler-pod-d0b95d92-6b6c-11ea-946c-0242ac11000f to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-d0b95d92-6b6c-11ea-946c-0242ac11000f.15fe507269a68c8f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d0b95d92-6b6c-11ea-946c-0242ac11000f.15fe507298d20d27], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-d0b95d92-6b6c-11ea-946c-0242ac11000f.15fe5072a723eef5], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fe5072db3cfa41], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:09:34.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-fgbw6" for this suite. Mar 21 12:09:40.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:09:40.450: INFO: namespace: e2e-tests-sched-pred-fgbw6, resource: bindings, ignored listing per whitelist Mar 21 12:09:40.460: INFO: namespace e2e-tests-sched-pred-fgbw6 deletion completed in 6.216379118s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:11.638 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:09:40.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 21 12:09:40.566: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Mar 21 12:09:40.573: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:40.575: INFO: Number of nodes with available pods: 0
Mar 21 12:09:40.575: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 12:09:41.580: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:41.584: INFO: Number of nodes with available pods: 0
Mar 21 12:09:41.584: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 12:09:42.599: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:42.602: INFO: Number of nodes with available pods: 0
Mar 21 12:09:42.602: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 12:09:43.580: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:43.583: INFO: Number of nodes with available pods: 0
Mar 21 12:09:43.583: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 12:09:44.581: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:44.584: INFO: Number of nodes with available pods: 2
Mar 21 12:09:44.584: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Mar 21 12:09:44.659: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:44.659: INFO: Wrong image for pod: daemon-set-z94tj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:44.670: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:45.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:45.674: INFO: Wrong image for pod: daemon-set-z94tj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:45.677: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:46.691: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:46.691: INFO: Wrong image for pod: daemon-set-z94tj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:46.691: INFO: Pod daemon-set-z94tj is not available
Mar 21 12:09:46.744: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:47.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:47.674: INFO: Wrong image for pod: daemon-set-z94tj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:47.674: INFO: Pod daemon-set-z94tj is not available
Mar 21 12:09:47.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:48.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:48.674: INFO: Pod daemon-set-rlxmx is not available
Mar 21 12:09:48.677: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:49.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:49.674: INFO: Pod daemon-set-rlxmx is not available
Mar 21 12:09:49.678: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:50.690: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:50.694: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:51.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:51.677: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:52.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:52.674: INFO: Pod daemon-set-njdjg is not available
Mar 21 12:09:52.678: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:53.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:53.674: INFO: Pod daemon-set-njdjg is not available
Mar 21 12:09:53.676: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:54.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:54.674: INFO: Pod daemon-set-njdjg is not available
Mar 21 12:09:54.679: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:55.708: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:55.708: INFO: Pod daemon-set-njdjg is not available
Mar 21 12:09:55.712: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:56.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:56.674: INFO: Pod daemon-set-njdjg is not available
Mar 21 12:09:56.678: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:57.713: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:57.713: INFO: Pod daemon-set-njdjg is not available
Mar 21 12:09:57.716: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:58.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:58.674: INFO: Pod daemon-set-njdjg is not available
Mar 21 12:09:58.678: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:09:59.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:09:59.674: INFO: Pod daemon-set-njdjg is not available
Mar 21 12:09:59.678: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:10:00.674: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:10:00.674: INFO: Pod daemon-set-njdjg is not available
Mar 21 12:10:00.679: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:10:01.677: INFO: Wrong image for pod: daemon-set-njdjg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 21 12:10:01.677: INFO: Pod daemon-set-njdjg is not available
Mar 21 12:10:01.682: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:10:02.674: INFO: Pod daemon-set-rvd4q is not available
Mar 21 12:10:02.678: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Mar 21 12:10:02.682: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:10:02.685: INFO: Number of nodes with available pods: 1
Mar 21 12:10:02.685: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 21 12:10:03.690: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:10:03.692: INFO: Number of nodes with available pods: 1
Mar 21 12:10:03.692: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 21 12:10:04.691: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:10:04.695: INFO: Number of nodes with available pods: 1
Mar 21 12:10:04.695: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 21 12:10:05.690: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:10:05.693: INFO: Number of nodes with available pods: 2
Mar 21 12:10:05.693: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-th8qj, will wait for the garbage collector to delete the pods
Mar 21 12:10:05.789: INFO: Deleting DaemonSet.extensions daemon-set took: 27.762571ms
Mar 21 12:10:05.889: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.311475ms
Mar 21 12:10:10.292: INFO: Number of nodes with available pods: 0
Mar 21 12:10:10.292: INFO: Number of running nodes: 0, number of available pods: 0
Mar 21 12:10:10.294: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-th8qj/daemonsets","resourceVersion":"1022241"},"items":null}
Mar 21 12:10:10.297: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-th8qj/pods","resourceVersion":"1022241"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:10:10.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-th8qj" for this suite.
Mar 21 12:10:16.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:10:16.405: INFO: namespace: e2e-tests-daemonsets-th8qj, resource: bindings, ignored listing per whitelist
Mar 21 12:10:16.407: INFO: namespace e2e-tests-daemonsets-th8qj deletion completed in 6.093708378s
• [SLOW TEST:35.947 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:10:16.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 21 12:10:24.621: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 21 12:10:24.634: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 21 12:10:26.634: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 21 12:10:26.639: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 21 12:10:28.634: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 21 12:10:28.638: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 21 12:10:30.634: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 21 12:10:30.638: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 21 12:10:32.634: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 21 12:10:32.638: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 21 12:10:34.634: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 21 12:10:34.638: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 21 12:10:36.634: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 21 12:10:36.638: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 21 12:10:38.634: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 21 12:10:38.638: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 21 12:10:40.634: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 21 12:10:40.638: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 21 12:10:42.634: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 21 12:10:42.638: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:10:42.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jtbnj" for this suite.
Mar 21 12:11:04.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:11:04.674: INFO: namespace: e2e-tests-container-lifecycle-hook-jtbnj, resource: bindings, ignored listing per whitelist
Mar 21 12:11:04.744: INFO: namespace e2e-tests-container-lifecycle-hook-jtbnj deletion completed in 22.095507249s
• [SLOW TEST:48.337 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:11:04.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 21 12:11:04.877: INFO: Waiting up to 5m0s for pod "pod-09d1b4c6-6b6d-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-s28bx" to be "success or failure"
Mar 21 12:11:04.898: INFO: Pod "pod-09d1b4c6-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.480811ms
Mar 21 12:11:06.902: INFO: Pod "pod-09d1b4c6-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025470039s
Mar 21 12:11:08.907: INFO: Pod "pod-09d1b4c6-6b6d-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030074326s
STEP: Saw pod success
Mar 21 12:11:08.907: INFO: Pod "pod-09d1b4c6-6b6d-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:11:08.910: INFO: Trying to get logs from node hunter-worker pod pod-09d1b4c6-6b6d-11ea-946c-0242ac11000f container test-container:
STEP: delete the pod
Mar 21 12:11:08.973: INFO: Waiting for pod pod-09d1b4c6-6b6d-11ea-946c-0242ac11000f to disappear
Mar 21 12:11:08.978: INFO: Pod pod-09d1b4c6-6b6d-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:11:08.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-s28bx" for this suite.
Mar 21 12:11:14.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:11:15.029: INFO: namespace: e2e-tests-emptydir-s28bx, resource: bindings, ignored listing per whitelist Mar 21 12:11:15.090: INFO: namespace e2e-tests-emptydir-s28bx deletion completed in 6.108505645s • [SLOW TEST:10.345 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:11:15.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:11:15.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-kubelet-test-ftd5x" for this suite. Mar 21 12:11:21.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:11:21.301: INFO: namespace: e2e-tests-kubelet-test-ftd5x, resource: bindings, ignored listing per whitelist Mar 21 12:11:21.375: INFO: namespace e2e-tests-kubelet-test-ftd5x deletion completed in 6.099023074s • [SLOW TEST:6.284 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:11:21.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-wrk5g Mar 21 12:11:25.489: INFO: Started pod liveness-http in namespace 
e2e-tests-container-probe-wrk5g STEP: checking the pod's current state and verifying that restartCount is present Mar 21 12:11:25.492: INFO: Initial restart count of pod liveness-http is 0 Mar 21 12:11:49.546: INFO: Restart count of pod e2e-tests-container-probe-wrk5g/liveness-http is now 1 (24.05466855s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:11:49.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-wrk5g" for this suite. Mar 21 12:11:55.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:11:55.690: INFO: namespace: e2e-tests-container-probe-wrk5g, resource: bindings, ignored listing per whitelist Mar 21 12:11:55.709: INFO: namespace e2e-tests-container-probe-wrk5g deletion completed in 6.096306744s • [SLOW TEST:34.334 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:11:55.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-l7tr STEP: Creating a pod to test atomic-volume-subpath Mar 21 12:11:55.864: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-l7tr" in namespace "e2e-tests-subpath-s59q8" to be "success or failure" Mar 21 12:11:55.868: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188345ms Mar 21 12:11:57.876: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012146898s Mar 21 12:11:59.882: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018032533s Mar 21 12:12:01.886: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Running", Reason="", readiness=false. Elapsed: 6.021654504s Mar 21 12:12:03.890: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Running", Reason="", readiness=false. Elapsed: 8.026182379s Mar 21 12:12:05.895: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Running", Reason="", readiness=false. Elapsed: 10.030589236s Mar 21 12:12:07.899: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Running", Reason="", readiness=false. Elapsed: 12.035332168s Mar 21 12:12:09.904: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Running", Reason="", readiness=false. Elapsed: 14.039567333s Mar 21 12:12:11.910: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Running", Reason="", readiness=false. Elapsed: 16.046186894s Mar 21 12:12:13.915: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Running", Reason="", readiness=false. Elapsed: 18.050900031s Mar 21 12:12:15.918: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.054358425s Mar 21 12:12:17.923: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Running", Reason="", readiness=false. Elapsed: 22.058421995s Mar 21 12:12:19.927: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Running", Reason="", readiness=false. Elapsed: 24.06323905s Mar 21 12:12:21.932: INFO: Pod "pod-subpath-test-secret-l7tr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.067455182s STEP: Saw pod success Mar 21 12:12:21.932: INFO: Pod "pod-subpath-test-secret-l7tr" satisfied condition "success or failure" Mar 21 12:12:21.934: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-l7tr container test-container-subpath-secret-l7tr: STEP: delete the pod Mar 21 12:12:21.970: INFO: Waiting for pod pod-subpath-test-secret-l7tr to disappear Mar 21 12:12:22.008: INFO: Pod pod-subpath-test-secret-l7tr no longer exists STEP: Deleting pod pod-subpath-test-secret-l7tr Mar 21 12:12:22.008: INFO: Deleting pod "pod-subpath-test-secret-l7tr" in namespace "e2e-tests-subpath-s59q8" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:12:22.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-s59q8" for this suite. 
Mar 21 12:12:28.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:12:28.059: INFO: namespace: e2e-tests-subpath-s59q8, resource: bindings, ignored listing per whitelist
Mar 21 12:12:28.113: INFO: namespace e2e-tests-subpath-s59q8 deletion completed in 6.098800498s

• [SLOW TEST:32.404 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:12:28.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 21 12:12:48.248: INFO: Container started at 2020-03-21 12:12:30 +0000 UTC, pod became ready at 2020-03-21 12:12:47 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:12:48.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lqsh5" for this suite.
Mar 21 12:13:12.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:13:12.357: INFO: namespace: e2e-tests-container-probe-lqsh5, resource: bindings, ignored listing per whitelist
Mar 21 12:13:12.369: INFO: namespace e2e-tests-container-probe-lqsh5 deletion completed in 24.117340799s

• [SLOW TEST:44.256 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:13:12.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 21 12:13:12.483: INFO: Waiting up to 5m0s for pod "downwardapi-volume-55e1aff6-6b6d-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-jplmp" to be "success or failure"
Mar 21 12:13:12.487: INFO: Pod "downwardapi-volume-55e1aff6-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240521ms
Mar 21 12:13:14.492: INFO: Pod "downwardapi-volume-55e1aff6-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008841619s
Mar 21 12:13:16.496: INFO: Pod "downwardapi-volume-55e1aff6-6b6d-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013039087s
STEP: Saw pod success
Mar 21 12:13:16.496: INFO: Pod "downwardapi-volume-55e1aff6-6b6d-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:13:16.499: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-55e1aff6-6b6d-11ea-946c-0242ac11000f container client-container:
STEP: delete the pod
Mar 21 12:13:16.579: INFO: Waiting for pod downwardapi-volume-55e1aff6-6b6d-11ea-946c-0242ac11000f to disappear
Mar 21 12:13:16.588: INFO: Pod downwardapi-volume-55e1aff6-6b6d-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:13:16.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jplmp" for this suite.
Mar 21 12:13:22.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:13:22.674: INFO: namespace: e2e-tests-projected-jplmp, resource: bindings, ignored listing per whitelist
Mar 21 12:13:22.701: INFO: namespace e2e-tests-projected-jplmp deletion completed in 6.109283188s

• [SLOW TEST:10.331 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:13:22.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 21 12:13:22.840: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:13:22.842: INFO: Number of nodes with available pods: 0
Mar 21 12:13:22.842: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 12:13:23.847: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:13:23.849: INFO: Number of nodes with available pods: 0
Mar 21 12:13:23.849: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 12:13:24.861: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:13:24.865: INFO: Number of nodes with available pods: 0
Mar 21 12:13:24.865: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 12:13:25.866: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:13:25.869: INFO: Number of nodes with available pods: 0
Mar 21 12:13:25.869: INFO: Node hunter-worker is running more than one daemon pod
Mar 21 12:13:26.846: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:13:26.850: INFO: Number of nodes with available pods: 2
Mar 21 12:13:26.850: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 21 12:13:26.932: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:13:26.938: INFO: Number of nodes with available pods: 1
Mar 21 12:13:26.938: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 21 12:13:27.943: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:13:27.946: INFO: Number of nodes with available pods: 1
Mar 21 12:13:27.946: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 21 12:13:28.950: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:13:28.953: INFO: Number of nodes with available pods: 1
Mar 21 12:13:28.953: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 21 12:13:30.664: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 21 12:13:30.675: INFO: Number of nodes with available pods: 2
Mar 21 12:13:30.675: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-h7ktd, will wait for the garbage collector to delete the pods
Mar 21 12:13:30.748: INFO: Deleting DaemonSet.extensions daemon-set took: 7.509798ms
Mar 21 12:13:30.848: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.388041ms
Mar 21 12:13:34.666: INFO: Number of nodes with available pods: 0
Mar 21 12:13:34.666: INFO: Number of running nodes: 0, number of available pods: 0
Mar 21 12:13:34.669: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-h7ktd/daemonsets","resourceVersion":"1022924"},"items":null}
Mar 21 12:13:34.671: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-h7ktd/pods","resourceVersion":"1022924"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:13:34.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-h7ktd" for this suite.
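The repeated "DaemonSet pods can't tolerate node hunter-control-plane with taints [...NoSchedule...]" lines reflect the check the test uses to decide which nodes should run a daemon pod: a node is skipped unless the pod tolerates every NoSchedule taint on it. A rough pure-Python sketch of that predicate (field names follow the Kubernetes taint/toleration API; the simplified Exists/Equal matching here is an assumption — the real logic also handles NoExecute, tolerationSeconds, and more):

```python
def tolerates(taint, toleration):
    """True if one toleration matches one taint (Exists/Equal operators only)."""
    if toleration.get("operator", "Equal") == "Exists":
        # An Exists toleration with an empty key tolerates any taint key.
        key_ok = toleration.get("key") in (None, taint["key"])
    else:
        key_ok = (toleration.get("key") == taint["key"]
                  and toleration.get("value", "") == taint.get("value", ""))
    # An empty effect matches all effects.
    effect_ok = toleration.get("effect") in (None, "", taint["effect"])
    return key_ok and effect_ok

def schedulable(node_taints, pod_tolerations):
    """A daemon pod fits a node only if every NoSchedule taint is tolerated."""
    return all(
        any(tolerates(taint, tol) for tol in pod_tolerations)
        for taint in node_taints
        if taint["effect"] == "NoSchedule"
    )

master_taints = [{"key": "node-role.kubernetes.io/master",
                  "value": "", "effect": "NoSchedule"}]
# The test's DaemonSet carries no master toleration, so the control-plane
# node is skipped while the untainted workers are counted.
print(schedulable(master_taints, []))  # False: control-plane skipped
print(schedulable([], []))             # True: untainted worker
```

This is why the log counts only the two hunter-worker nodes ("Number of running nodes: 2") despite the cluster having three nodes.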
Mar 21 12:13:40.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:13:40.797: INFO: namespace: e2e-tests-daemonsets-h7ktd, resource: bindings, ignored listing per whitelist Mar 21 12:13:40.816: INFO: namespace e2e-tests-daemonsets-h7ktd deletion completed in 6.133012357s • [SLOW TEST:18.115 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:13:40.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 12:13:40.900: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:13:44.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-xxpcm" for this suite. 
Mar 21 12:14:34.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:14:34.998: INFO: namespace: e2e-tests-pods-xxpcm, resource: bindings, ignored listing per whitelist Mar 21 12:14:35.024: INFO: namespace e2e-tests-pods-xxpcm deletion completed in 50.084551501s • [SLOW TEST:54.209 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:14:35.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 21 12:14:35.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8727fd84-6b6d-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-xt2c7" to be "success or failure" Mar 21 12:14:35.147: INFO: Pod 
"downwardapi-volume-8727fd84-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431599ms Mar 21 12:14:37.151: INFO: Pod "downwardapi-volume-8727fd84-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008319756s Mar 21 12:14:39.155: INFO: Pod "downwardapi-volume-8727fd84-6b6d-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012887819s STEP: Saw pod success Mar 21 12:14:39.155: INFO: Pod "downwardapi-volume-8727fd84-6b6d-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:14:39.158: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8727fd84-6b6d-11ea-946c-0242ac11000f container client-container: STEP: delete the pod Mar 21 12:14:39.180: INFO: Waiting for pod downwardapi-volume-8727fd84-6b6d-11ea-946c-0242ac11000f to disappear Mar 21 12:14:39.190: INFO: Pod downwardapi-volume-8727fd84-6b6d-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:14:39.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xt2c7" for this suite. 
Mar 21 12:14:45.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:14:45.277: INFO: namespace: e2e-tests-downward-api-xt2c7, resource: bindings, ignored listing per whitelist Mar 21 12:14:45.306: INFO: namespace e2e-tests-downward-api-xt2c7 deletion completed in 6.112582413s • [SLOW TEST:10.281 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:14:45.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 21 12:14:45.432: INFO: Waiting up to 5m0s for pod "pod-8d43998c-6b6d-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-n894n" to be "success or failure" Mar 21 12:14:45.454: INFO: Pod "pod-8d43998c-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.438905ms Mar 21 12:14:47.489: INFO: Pod "pod-8d43998c-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.057441013s Mar 21 12:14:49.494: INFO: Pod "pod-8d43998c-6b6d-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061595615s STEP: Saw pod success Mar 21 12:14:49.494: INFO: Pod "pod-8d43998c-6b6d-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:14:50.100: INFO: Trying to get logs from node hunter-worker2 pod pod-8d43998c-6b6d-11ea-946c-0242ac11000f container test-container: STEP: delete the pod Mar 21 12:14:50.361: INFO: Waiting for pod pod-8d43998c-6b6d-11ea-946c-0242ac11000f to disappear Mar 21 12:14:50.374: INFO: Pod pod-8d43998c-6b6d-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:14:50.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-n894n" for this suite. Mar 21 12:14:56.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:14:56.435: INFO: namespace: e2e-tests-emptydir-n894n, resource: bindings, ignored listing per whitelist Mar 21 12:14:56.469: INFO: namespace e2e-tests-emptydir-n894n deletion completed in 6.090818111s • [SLOW TEST:11.163 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:14:56.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Mar 21 12:14:56.581: INFO: Waiting up to 5m0s for pod "pod-93ebde3c-6b6d-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-glnrq" to be "success or failure" Mar 21 12:14:56.584: INFO: Pod "pod-93ebde3c-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.326163ms Mar 21 12:14:58.588: INFO: Pod "pod-93ebde3c-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00716856s Mar 21 12:15:00.592: INFO: Pod "pod-93ebde3c-6b6d-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011426827s STEP: Saw pod success Mar 21 12:15:00.592: INFO: Pod "pod-93ebde3c-6b6d-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:15:00.595: INFO: Trying to get logs from node hunter-worker2 pod pod-93ebde3c-6b6d-11ea-946c-0242ac11000f container test-container: STEP: delete the pod Mar 21 12:15:00.615: INFO: Waiting for pod pod-93ebde3c-6b6d-11ea-946c-0242ac11000f to disappear Mar 21 12:15:00.620: INFO: Pod pod-93ebde3c-6b6d-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:15:00.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-glnrq" for this suite. 
Mar 21 12:15:06.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:15:06.666: INFO: namespace: e2e-tests-emptydir-glnrq, resource: bindings, ignored listing per whitelist Mar 21 12:15:06.715: INFO: namespace e2e-tests-emptydir-glnrq deletion completed in 6.091919762s • [SLOW TEST:10.246 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:15:06.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 21 12:15:06.828: INFO: Waiting up to 5m0s for pod "pod-9a095958-6b6d-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-f7l28" to be "success or failure" Mar 21 12:15:06.832: INFO: Pod "pod-9a095958-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.165793ms Mar 21 12:15:08.835: INFO: Pod "pod-9a095958-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006140944s Mar 21 12:15:10.839: INFO: Pod "pod-9a095958-6b6d-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010589652s STEP: Saw pod success Mar 21 12:15:10.839: INFO: Pod "pod-9a095958-6b6d-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:15:10.842: INFO: Trying to get logs from node hunter-worker2 pod pod-9a095958-6b6d-11ea-946c-0242ac11000f container test-container: STEP: delete the pod Mar 21 12:15:10.878: INFO: Waiting for pod pod-9a095958-6b6d-11ea-946c-0242ac11000f to disappear Mar 21 12:15:10.898: INFO: Pod pod-9a095958-6b6d-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:15:10.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-f7l28" for this suite. Mar 21 12:15:16.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:15:17.015: INFO: namespace: e2e-tests-emptydir-f7l28, resource: bindings, ignored listing per whitelist Mar 21 12:15:17.026: INFO: namespace e2e-tests-emptydir-f7l28 deletion completed in 6.123382392s • [SLOW TEST:10.311 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:15:17.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 21 12:15:24.372: INFO: 9 pods remaining Mar 21 12:15:24.373: INFO: 0 pods has nil DeletionTimestamp Mar 21 12:15:24.373: INFO: Mar 21 12:15:25.300: INFO: 0 pods remaining Mar 21 12:15:25.300: INFO: 0 pods has nil DeletionTimestamp Mar 21 12:15:25.300: INFO: STEP: Gathering metrics W0321 12:15:25.606249 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 21 12:15:25.606: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:15:25.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-xzcjg" for this suite. 
Mar 21 12:15:31.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:15:31.686: INFO: namespace: e2e-tests-gc-xzcjg, resource: bindings, ignored listing per whitelist Mar 21 12:15:31.733: INFO: namespace e2e-tests-gc-xzcjg deletion completed in 6.123978839s • [SLOW TEST:14.707 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:15:31.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-a8f137c0-6b6d-11ea-946c-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 21 12:15:31.853: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8f3fcf7-6b6d-11ea-946c-0242ac11000f" in namespace "e2e-tests-configmap-lmbhn" to be "success or failure" Mar 21 12:15:31.903: INFO: Pod "pod-configmaps-a8f3fcf7-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.856191ms Mar 21 12:15:33.907: INFO: Pod "pod-configmaps-a8f3fcf7-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053710803s Mar 21 12:15:35.910: INFO: Pod "pod-configmaps-a8f3fcf7-6b6d-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057271476s STEP: Saw pod success Mar 21 12:15:35.910: INFO: Pod "pod-configmaps-a8f3fcf7-6b6d-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:15:35.913: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-a8f3fcf7-6b6d-11ea-946c-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 21 12:15:35.927: INFO: Waiting for pod pod-configmaps-a8f3fcf7-6b6d-11ea-946c-0242ac11000f to disappear Mar 21 12:15:35.932: INFO: Pod pod-configmaps-a8f3fcf7-6b6d-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:15:35.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-lmbhn" for this suite. 
Mar 21 12:15:41.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:15:42.001: INFO: namespace: e2e-tests-configmap-lmbhn, resource: bindings, ignored listing per whitelist Mar 21 12:15:42.033: INFO: namespace e2e-tests-configmap-lmbhn deletion completed in 6.097453141s • [SLOW TEST:10.300 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:15:42.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 21 12:15:42.152: INFO: Waiting up to 5m0s for pod "pod-af187bdd-6b6d-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-2jfjm" to be "success or failure" Mar 21 12:15:42.418: INFO: Pod "pod-af187bdd-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 266.507584ms Mar 21 12:15:44.422: INFO: Pod "pod-af187bdd-6b6d-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.270243247s
Mar 21 12:15:46.426: INFO: Pod "pod-af187bdd-6b6d-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.274429826s
STEP: Saw pod success
Mar 21 12:15:46.426: INFO: Pod "pod-af187bdd-6b6d-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:15:46.429: INFO: Trying to get logs from node hunter-worker2 pod pod-af187bdd-6b6d-11ea-946c-0242ac11000f container test-container:
STEP: delete the pod
Mar 21 12:15:46.458: INFO: Waiting for pod pod-af187bdd-6b6d-11ea-946c-0242ac11000f to disappear
Mar 21 12:15:46.528: INFO: Pod pod-af187bdd-6b6d-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:15:46.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2jfjm" for this suite.
Mar 21 12:15:52.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:15:52.607: INFO: namespace: e2e-tests-emptydir-2jfjm, resource: bindings, ignored listing per whitelist
Mar 21 12:15:52.664: INFO: namespace e2e-tests-emptydir-2jfjm deletion completed in 6.124509241s
• [SLOW TEST:10.632 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a
kubernetes client
Mar 21 12:15:52.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-h4rbg
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-h4rbg
STEP: Deleting pre-stop pod
Mar 21 12:16:05.827: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:16:05.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-h4rbg" for this suite.
Mar 21 12:16:43.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:16:43.866: INFO: namespace: e2e-tests-prestop-h4rbg, resource: bindings, ignored listing per whitelist
Mar 21 12:16:43.938: INFO: namespace e2e-tests-prestop-h4rbg deletion completed in 38.099965626s
• [SLOW TEST:51.274 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:16:43.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 21 12:16:44.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
--namespace=e2e-tests-kubectl-g7xq2'
Mar 21 12:16:46.116: INFO: stderr: ""
Mar 21 12:16:46.116: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Mar 21 12:16:46.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-g7xq2'
Mar 21 12:17:51.729: INFO: stderr: ""
Mar 21 12:17:51.729: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:17:51.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-g7xq2" for this suite.
Mar 21 12:17:57.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:17:57.847: INFO: namespace: e2e-tests-kubectl-g7xq2, resource: bindings, ignored listing per whitelist
Mar 21 12:17:57.861: INFO: namespace e2e-tests-kubectl-g7xq2 deletion completed in 6.114482837s
• [SLOW TEST:73.923 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:17:57.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-pkj2b/configmap-test-000a1228-6b6e-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 21 12:17:57.955: INFO: Waiting up to 5m0s for pod "pod-configmaps-000aaa5c-6b6e-11ea-946c-0242ac11000f" in namespace "e2e-tests-configmap-pkj2b" to be "success or failure"
Mar 21 12:17:57.970: INFO: Pod "pod-configmaps-000aaa5c-6b6e-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.443542ms
Mar 21 12:17:59.974: INFO: Pod "pod-configmaps-000aaa5c-6b6e-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018389958s
Mar 21 12:18:01.978: INFO: Pod "pod-configmaps-000aaa5c-6b6e-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.022563357s
STEP: Saw pod success
Mar 21 12:18:01.978: INFO: Pod "pod-configmaps-000aaa5c-6b6e-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:18:01.981: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-000aaa5c-6b6e-11ea-946c-0242ac11000f container env-test:
STEP: delete the pod
Mar 21 12:18:02.013: INFO: Waiting for pod pod-configmaps-000aaa5c-6b6e-11ea-946c-0242ac11000f to disappear
Mar 21 12:18:02.028: INFO: Pod pod-configmaps-000aaa5c-6b6e-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:18:02.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pkj2b" for this suite.
Mar 21 12:18:08.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:18:08.089: INFO: namespace: e2e-tests-configmap-pkj2b, resource: bindings, ignored listing per whitelist
Mar 21 12:18:08.119: INFO: namespace e2e-tests-configmap-pkj2b deletion completed in 6.087947561s
• [SLOW TEST:10.257 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:18:08.119: INFO: >>> kubeConfig: /root/.kube/config
STEP:
Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 21 12:18:08.228: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06287d85-6b6e-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-t9mzj" to be "success or failure"
Mar 21 12:18:08.238: INFO: Pod "downwardapi-volume-06287d85-6b6e-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.004009ms
Mar 21 12:18:10.242: INFO: Pod "downwardapi-volume-06287d85-6b6e-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014267133s
Mar 21 12:18:12.245: INFO: Pod "downwardapi-volume-06287d85-6b6e-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017189087s
STEP: Saw pod success
Mar 21 12:18:12.245: INFO: Pod "downwardapi-volume-06287d85-6b6e-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:18:12.248: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-06287d85-6b6e-11ea-946c-0242ac11000f container client-container:
STEP: delete the pod
Mar 21 12:18:12.269: INFO: Waiting for pod downwardapi-volume-06287d85-6b6e-11ea-946c-0242ac11000f to disappear
Mar 21 12:18:12.274: INFO: Pod downwardapi-volume-06287d85-6b6e-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:18:12.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t9mzj" for this suite.
Mar 21 12:18:18.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:18:18.296: INFO: namespace: e2e-tests-projected-t9mzj, resource: bindings, ignored listing per whitelist
Mar 21 12:18:18.360: INFO: namespace e2e-tests-projected-t9mzj deletion completed in 6.08374194s
• [SLOW TEST:10.241 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:18:18.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 21 12:18:22.991: INFO: Successfully updated pod "labelsupdate0c43127b-6b6e-11ea-946c-0242ac11000f"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:18:25.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP:
Destroying namespace "e2e-tests-projected-9htp8" for this suite.
Mar 21 12:18:47.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:18:47.157: INFO: namespace: e2e-tests-projected-9htp8, resource: bindings, ignored listing per whitelist
Mar 21 12:18:47.159: INFO: namespace e2e-tests-projected-9htp8 deletion completed in 22.126202801s
• [SLOW TEST:28.799 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:18:47.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:18:51.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-f8db7" for this suite.
Mar 21 12:18:57.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:18:57.509: INFO: namespace: e2e-tests-emptydir-wrapper-f8db7, resource: bindings, ignored listing per whitelist
Mar 21 12:18:57.535: INFO: namespace e2e-tests-emptydir-wrapper-f8db7 deletion completed in 6.136545239s
• [SLOW TEST:10.376 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:18:57.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-23a11966-6b6e-11ea-946c-0242ac11000f
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-23a11966-6b6e-11ea-946c-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:20:26.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying
namespace "e2e-tests-projected-xrdwg" for this suite.
Mar 21 12:20:48.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:20:48.193: INFO: namespace: e2e-tests-projected-xrdwg, resource: bindings, ignored listing per whitelist
Mar 21 12:20:48.260: INFO: namespace e2e-tests-projected-xrdwg deletion completed in 22.122116489s
• [SLOW TEST:110.725 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:20:48.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-65a18bac-6b6e-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 21 12:20:48.423: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-65a5c42e-6b6e-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-jspjn" to be "success or failure"
Mar 21 12:20:48.427: INFO: Pod
"pod-projected-configmaps-65a5c42e-6b6e-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.797931ms Mar 21 12:20:50.431: INFO: Pod "pod-projected-configmaps-65a5c42e-6b6e-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007734419s Mar 21 12:20:52.435: INFO: Pod "pod-projected-configmaps-65a5c42e-6b6e-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011433203s STEP: Saw pod success Mar 21 12:20:52.435: INFO: Pod "pod-projected-configmaps-65a5c42e-6b6e-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:20:52.437: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-65a5c42e-6b6e-11ea-946c-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 21 12:20:52.452: INFO: Waiting for pod pod-projected-configmaps-65a5c42e-6b6e-11ea-946c-0242ac11000f to disappear Mar 21 12:20:52.469: INFO: Pod pod-projected-configmaps-65a5c42e-6b6e-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:20:52.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jspjn" for this suite. 
Mar 21 12:20:58.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:20:58.512: INFO: namespace: e2e-tests-projected-jspjn, resource: bindings, ignored listing per whitelist
Mar 21 12:20:58.557: INFO: namespace e2e-tests-projected-jspjn deletion completed in 6.08505279s
• [SLOW TEST:10.297 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:20:58.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Mar 21 12:20:58.645: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-hqzqv" to be "success or failure"
Mar 21 12:20:58.662: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false.
Elapsed: 17.634866ms
Mar 21 12:21:00.667: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021819484s
Mar 21 12:21:02.671: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025941047s
STEP: Saw pod success
Mar 21 12:21:02.671: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Mar 21 12:21:02.674: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Mar 21 12:21:02.736: INFO: Waiting for pod pod-host-path-test to disappear
Mar 21 12:21:02.740: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:21:02.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-hqzqv" for this suite.
Mar 21 12:21:08.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:21:08.795: INFO: namespace: e2e-tests-hostpath-hqzqv, resource: bindings, ignored listing per whitelist
Mar 21 12:21:08.830: INFO: namespace e2e-tests-hostpath-hqzqv deletion completed in 6.086533347s
• [SLOW TEST:10.273 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:21:08.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 21 12:21:08.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Mar 21 12:21:09.025: INFO: stderr: ""
Mar 21 12:21:09.025: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:25:50Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Mar 21 12:21:09.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-czrpn'
Mar 21 12:21:09.289: INFO: stderr: ""
Mar 21 12:21:09.289: INFO: stdout: "replicationcontroller/redis-master created\n"
Mar 21 12:21:09.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-czrpn'
Mar 21 12:21:09.539: INFO: stderr: ""
Mar 21 12:21:09.539: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Mar 21 12:21:10.560: INFO: Selector matched 1 pods for map[app:redis]
Mar 21 12:21:10.560: INFO: Found 0 / 1
Mar 21 12:21:11.561: INFO: Selector matched 1 pods for map[app:redis]
Mar 21 12:21:11.561: INFO: Found 0 / 1
Mar 21 12:21:12.544: INFO: Selector matched 1 pods for map[app:redis]
Mar 21 12:21:12.544: INFO: Found 1 / 1
Mar 21 12:21:12.544: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Mar 21 12:21:12.547: INFO: Selector matched 1 pods for map[app:redis]
Mar 21 12:21:12.547: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 21 12:21:12.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-4j6j2 --namespace=e2e-tests-kubectl-czrpn'
Mar 21 12:21:12.680: INFO: stderr: ""
Mar 21 12:21:12.681: INFO: stdout: "Name: redis-master-4j6j2\nNamespace: e2e-tests-kubectl-czrpn\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Sat, 21 Mar 2020 12:21:09 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.174\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://397d39203f8266d050a5479db0a81923a807f239d7053c276be9110a7e15ffb7\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 21 Mar 2020 12:21:11 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-zb4zx (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-zb4zx:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-zb4zx\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n
node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned e2e-tests-kubectl-czrpn/redis-master-4j6j2 to hunter-worker\n Normal Pulled 2s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n"
Mar 21 12:21:12.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-czrpn'
Mar 21 12:21:12.806: INFO: stderr: ""
Mar 21 12:21:12.807: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-czrpn\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-4j6j2\n"
Mar 21 12:21:12.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-czrpn'
Mar 21 12:21:12.917: INFO: stderr: ""
Mar 21 12:21:12.917: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-czrpn\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.99.72.255\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.174:6379\nSession Affinity: None\nEvents: \n"
Mar 21 12:21:12.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
Mar 21 12:21:13.089: INFO:
stderr: ""
Mar 21 12:21:13.089: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 21 Mar 2020 12:21:04 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 21 Mar 2020 12:21:04 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 21 Mar 2020 12:21:04 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 21 Mar 2020 12:21:04 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated
Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d17h\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 5d17h\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 5d17h\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 5d17h\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d17h\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 5d17h\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d17h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 21 12:21:13.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-czrpn' Mar 21 12:21:13.201: INFO: stderr: "" Mar 21 12:21:13.201: INFO: stdout: "Name: e2e-tests-kubectl-czrpn\nLabels: e2e-framework=kubectl\n e2e-run=40c6138a-6b61-11ea-946c-0242ac11000f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:21:13.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-czrpn" for this suite. 
Mar 21 12:21:35.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:21:35.240: INFO: namespace: e2e-tests-kubectl-czrpn, resource: bindings, ignored listing per whitelist Mar 21 12:21:35.302: INFO: namespace e2e-tests-kubectl-czrpn deletion completed in 22.096784401s • [SLOW TEST:26.472 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:21:35.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6k44t STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 21 12:21:35.451: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 21 12:21:59.534: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.176:8080/dial?request=hostName&protocol=udp&host=10.244.2.46&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-6k44t PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 12:21:59.534: INFO: >>> kubeConfig: /root/.kube/config I0321 12:21:59.574022 6 log.go:172] (0xc0009f1970) (0xc000f52aa0) Create stream I0321 12:21:59.574054 6 log.go:172] (0xc0009f1970) (0xc000f52aa0) Stream added, broadcasting: 1 I0321 12:21:59.576623 6 log.go:172] (0xc0009f1970) Reply frame received for 1 I0321 12:21:59.576671 6 log.go:172] (0xc0009f1970) (0xc001be5ae0) Create stream I0321 12:21:59.576687 6 log.go:172] (0xc0009f1970) (0xc001be5ae0) Stream added, broadcasting: 3 I0321 12:21:59.577762 6 log.go:172] (0xc0009f1970) Reply frame received for 3 I0321 12:21:59.577809 6 log.go:172] (0xc0009f1970) (0xc0023b9f40) Create stream I0321 12:21:59.577819 6 log.go:172] (0xc0009f1970) (0xc0023b9f40) Stream added, broadcasting: 5 I0321 12:21:59.578711 6 log.go:172] (0xc0009f1970) Reply frame received for 5 I0321 12:21:59.686557 6 log.go:172] (0xc0009f1970) Data frame received for 3 I0321 12:21:59.686588 6 log.go:172] (0xc001be5ae0) (3) Data frame handling I0321 12:21:59.686605 6 log.go:172] (0xc001be5ae0) (3) Data frame sent I0321 12:21:59.687037 6 log.go:172] (0xc0009f1970) Data frame received for 5 I0321 12:21:59.687064 6 log.go:172] (0xc0023b9f40) (5) Data frame handling I0321 12:21:59.687092 6 log.go:172] (0xc0009f1970) Data frame received for 3 I0321 12:21:59.687121 6 log.go:172] (0xc001be5ae0) (3) Data frame handling I0321 12:21:59.688879 6 log.go:172] (0xc0009f1970) Data frame received for 1 I0321 12:21:59.688911 6 log.go:172] (0xc000f52aa0) (1) Data frame handling I0321 12:21:59.688933 6 log.go:172] (0xc000f52aa0) (1) Data frame sent I0321 12:21:59.688959 6 log.go:172] (0xc0009f1970) (0xc000f52aa0) Stream removed, broadcasting: 1 I0321 12:21:59.688986 6 log.go:172] (0xc0009f1970) Go away 
received I0321 12:21:59.689097 6 log.go:172] (0xc0009f1970) (0xc000f52aa0) Stream removed, broadcasting: 1 I0321 12:21:59.689277 6 log.go:172] (0xc0009f1970) (0xc001be5ae0) Stream removed, broadcasting: 3 I0321 12:21:59.689307 6 log.go:172] (0xc0009f1970) (0xc0023b9f40) Stream removed, broadcasting: 5 Mar 21 12:21:59.689: INFO: Waiting for endpoints: map[] Mar 21 12:21:59.692: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.176:8080/dial?request=hostName&protocol=udp&host=10.244.1.175&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-6k44t PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 12:21:59.692: INFO: >>> kubeConfig: /root/.kube/config I0321 12:21:59.724662 6 log.go:172] (0xc00033def0) (0xc0004fd220) Create stream I0321 12:21:59.724691 6 log.go:172] (0xc00033def0) (0xc0004fd220) Stream added, broadcasting: 1 I0321 12:21:59.727361 6 log.go:172] (0xc00033def0) Reply frame received for 1 I0321 12:21:59.727409 6 log.go:172] (0xc00033def0) (0xc0004fd5e0) Create stream I0321 12:21:59.727423 6 log.go:172] (0xc00033def0) (0xc0004fd5e0) Stream added, broadcasting: 3 I0321 12:21:59.728448 6 log.go:172] (0xc00033def0) Reply frame received for 3 I0321 12:21:59.728489 6 log.go:172] (0xc00033def0) (0xc001be5d60) Create stream I0321 12:21:59.728509 6 log.go:172] (0xc00033def0) (0xc001be5d60) Stream added, broadcasting: 5 I0321 12:21:59.729597 6 log.go:172] (0xc00033def0) Reply frame received for 5 I0321 12:21:59.791970 6 log.go:172] (0xc00033def0) Data frame received for 3 I0321 12:21:59.792029 6 log.go:172] (0xc0004fd5e0) (3) Data frame handling I0321 12:21:59.792060 6 log.go:172] (0xc0004fd5e0) (3) Data frame sent I0321 12:21:59.792797 6 log.go:172] (0xc00033def0) Data frame received for 3 I0321 12:21:59.792848 6 log.go:172] (0xc0004fd5e0) (3) Data frame handling I0321 12:21:59.792884 6 log.go:172] (0xc00033def0) Data frame received for 5 I0321 
12:21:59.792904 6 log.go:172] (0xc001be5d60) (5) Data frame handling I0321 12:21:59.794623 6 log.go:172] (0xc00033def0) Data frame received for 1 I0321 12:21:59.794658 6 log.go:172] (0xc0004fd220) (1) Data frame handling I0321 12:21:59.794677 6 log.go:172] (0xc0004fd220) (1) Data frame sent I0321 12:21:59.794696 6 log.go:172] (0xc00033def0) (0xc0004fd220) Stream removed, broadcasting: 1 I0321 12:21:59.794720 6 log.go:172] (0xc00033def0) Go away received I0321 12:21:59.794880 6 log.go:172] (0xc00033def0) (0xc0004fd220) Stream removed, broadcasting: 1 I0321 12:21:59.794908 6 log.go:172] (0xc00033def0) (0xc0004fd5e0) Stream removed, broadcasting: 3 I0321 12:21:59.794922 6 log.go:172] (0xc00033def0) (0xc001be5d60) Stream removed, broadcasting: 5 Mar 21 12:21:59.794: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:21:59.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-6k44t" for this suite. 
Mar 21 12:22:21.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:22:21.877: INFO: namespace: e2e-tests-pod-network-test-6k44t, resource: bindings, ignored listing per whitelist Mar 21 12:22:21.909: INFO: namespace e2e-tests-pod-network-test-6k44t deletion completed in 22.109578551s • [SLOW TEST:46.607 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:22:21.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-9d7005ac-6b6e-11ea-946c-0242ac11000f STEP: Creating a pod to test consume secrets Mar 21 12:22:22.037: INFO: Waiting up to 5m0s for pod "pod-secrets-9d722c60-6b6e-11ea-946c-0242ac11000f" in namespace "e2e-tests-secrets-f6xms" to be "success or failure" Mar 21 12:22:22.056: INFO: Pod "pod-secrets-9d722c60-6b6e-11ea-946c-0242ac11000f": Phase="Pending", 
Reason="", readiness=false. Elapsed: 18.745086ms Mar 21 12:22:24.072: INFO: Pod "pod-secrets-9d722c60-6b6e-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034329201s Mar 21 12:22:26.076: INFO: Pod "pod-secrets-9d722c60-6b6e-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038422275s STEP: Saw pod success Mar 21 12:22:26.076: INFO: Pod "pod-secrets-9d722c60-6b6e-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:22:26.079: INFO: Trying to get logs from node hunter-worker pod pod-secrets-9d722c60-6b6e-11ea-946c-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 21 12:22:26.095: INFO: Waiting for pod pod-secrets-9d722c60-6b6e-11ea-946c-0242ac11000f to disappear Mar 21 12:22:26.106: INFO: Pod pod-secrets-9d722c60-6b6e-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:22:26.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-f6xms" for this suite. 
Mar 21 12:22:32.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:22:32.150: INFO: namespace: e2e-tests-secrets-f6xms, resource: bindings, ignored listing per whitelist Mar 21 12:22:32.207: INFO: namespace e2e-tests-secrets-f6xms deletion completed in 6.098988023s • [SLOW TEST:10.298 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:22:32.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 12:22:32.379: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:22:33.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-554bq" for this suite. 
Mar 21 12:22:39.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:22:39.579: INFO: namespace: e2e-tests-custom-resource-definition-554bq, resource: bindings, ignored listing per whitelist Mar 21 12:22:39.630: INFO: namespace e2e-tests-custom-resource-definition-554bq deletion completed in 6.09766769s • [SLOW TEST:7.423 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:22:39.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-a800816e-6b6e-11ea-946c-0242ac11000f STEP: Creating the pod STEP: Updating configmap configmap-test-upd-a800816e-6b6e-11ea-946c-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 
Mar 21 12:22:45.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-crvc6" for this suite. Mar 21 12:23:07.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:23:07.851: INFO: namespace: e2e-tests-configmap-crvc6, resource: bindings, ignored listing per whitelist Mar 21 12:23:07.898: INFO: namespace e2e-tests-configmap-crvc6 deletion completed in 22.092576022s • [SLOW TEST:28.268 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:23:07.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-t89cg I0321 12:23:08.006114 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-t89cg, replica count: 1 I0321 12:23:09.056538 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0321 12:23:10.056758 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 12:23:11.056954 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 21 12:23:11.185: INFO: Created: latency-svc-t79lf Mar 21 12:23:11.238: INFO: Got endpoints: latency-svc-t79lf [81.134284ms] Mar 21 12:23:11.263: INFO: Created: latency-svc-g77rw Mar 21 12:23:11.277: INFO: Got endpoints: latency-svc-g77rw [38.473363ms] Mar 21 12:23:11.293: INFO: Created: latency-svc-hv8t9 Mar 21 12:23:11.307: INFO: Got endpoints: latency-svc-hv8t9 [68.693782ms] Mar 21 12:23:11.323: INFO: Created: latency-svc-jgmpx Mar 21 12:23:11.352: INFO: Got endpoints: latency-svc-jgmpx [114.162187ms] Mar 21 12:23:11.359: INFO: Created: latency-svc-bc7zn Mar 21 12:23:11.373: INFO: Got endpoints: latency-svc-bc7zn [134.877963ms] Mar 21 12:23:11.395: INFO: Created: latency-svc-kmhpr Mar 21 12:23:11.410: INFO: Got endpoints: latency-svc-kmhpr [171.826941ms] Mar 21 12:23:11.431: INFO: Created: latency-svc-5gmzp Mar 21 12:23:11.439: INFO: Got endpoints: latency-svc-5gmzp [201.246303ms] Mar 21 12:23:11.492: INFO: Created: latency-svc-stgcg Mar 21 12:23:11.495: INFO: Got endpoints: latency-svc-stgcg [257.247639ms] Mar 21 12:23:11.527: INFO: Created: latency-svc-vtbb2 Mar 21 12:23:11.542: INFO: Got endpoints: latency-svc-vtbb2 [303.762267ms] Mar 21 12:23:11.563: INFO: Created: latency-svc-7s2tz Mar 21 12:23:11.572: INFO: Got endpoints: latency-svc-7s2tz [333.366163ms] Mar 21 12:23:11.635: INFO: Created: latency-svc-gnf9z Mar 21 12:23:11.638: INFO: Got endpoints: latency-svc-gnf9z [399.701893ms] Mar 21 12:23:11.665: INFO: Created: latency-svc-8fpf5 Mar 21 12:23:11.675: INFO: Got endpoints: latency-svc-8fpf5 [436.493565ms] Mar 21 12:23:11.695: INFO: Created: latency-svc-rp4s7 Mar 21 12:23:11.711: INFO: Got endpoints: 
latency-svc-rp4s7 [472.633762ms] Mar 21 12:23:11.731: INFO: Created: latency-svc-24gs8 Mar 21 12:23:11.772: INFO: Got endpoints: latency-svc-24gs8 [533.216968ms] Mar 21 12:23:11.798: INFO: Created: latency-svc-mqmwn Mar 21 12:23:11.844: INFO: Got endpoints: latency-svc-mqmwn [605.19699ms] Mar 21 12:23:11.922: INFO: Created: latency-svc-p9z49 Mar 21 12:23:11.947: INFO: Got endpoints: latency-svc-p9z49 [708.347087ms] Mar 21 12:23:11.947: INFO: Created: latency-svc-t7cts Mar 21 12:23:11.958: INFO: Got endpoints: latency-svc-t7cts [681.815568ms] Mar 21 12:23:12.001: INFO: Created: latency-svc-j59ks Mar 21 12:23:12.013: INFO: Got endpoints: latency-svc-j59ks [706.038465ms] Mar 21 12:23:12.085: INFO: Created: latency-svc-qz8sm Mar 21 12:23:12.086: INFO: Got endpoints: latency-svc-qz8sm [733.783086ms] Mar 21 12:23:12.127: INFO: Created: latency-svc-5n9fz Mar 21 12:23:12.139: INFO: Got endpoints: latency-svc-5n9fz [766.066865ms] Mar 21 12:23:12.163: INFO: Created: latency-svc-lzmlt Mar 21 12:23:12.175: INFO: Got endpoints: latency-svc-lzmlt [765.315441ms] Mar 21 12:23:12.221: INFO: Created: latency-svc-6sr62 Mar 21 12:23:12.225: INFO: Got endpoints: latency-svc-6sr62 [785.396562ms] Mar 21 12:23:12.241: INFO: Created: latency-svc-t8qqs Mar 21 12:23:12.254: INFO: Got endpoints: latency-svc-t8qqs [758.459328ms] Mar 21 12:23:12.271: INFO: Created: latency-svc-rmmk5 Mar 21 12:23:12.290: INFO: Got endpoints: latency-svc-rmmk5 [747.667351ms] Mar 21 12:23:12.307: INFO: Created: latency-svc-wqr95 Mar 21 12:23:12.383: INFO: Got endpoints: latency-svc-wqr95 [810.860515ms] Mar 21 12:23:12.386: INFO: Created: latency-svc-2fnhz Mar 21 12:23:12.392: INFO: Got endpoints: latency-svc-2fnhz [754.044973ms] Mar 21 12:23:12.423: INFO: Created: latency-svc-7nvzg Mar 21 12:23:12.441: INFO: Got endpoints: latency-svc-7nvzg [766.287763ms] Mar 21 12:23:12.481: INFO: Created: latency-svc-9jdlz Mar 21 12:23:12.520: INFO: Got endpoints: latency-svc-9jdlz [809.081605ms] Mar 21 12:23:12.542: INFO: 
Created: latency-svc-mgspv Mar 21 12:23:12.555: INFO: Got endpoints: latency-svc-mgspv [783.614232ms] Mar 21 12:23:12.577: INFO: Created: latency-svc-kds2b Mar 21 12:23:12.592: INFO: Got endpoints: latency-svc-kds2b [748.265969ms] Mar 21 12:23:12.614: INFO: Created: latency-svc-dwhtk Mar 21 12:23:12.670: INFO: Got endpoints: latency-svc-dwhtk [722.997855ms] Mar 21 12:23:12.672: INFO: Created: latency-svc-t7znl Mar 21 12:23:12.676: INFO: Got endpoints: latency-svc-t7znl [717.536485ms] Mar 21 12:23:12.702: INFO: Created: latency-svc-hd7q8 Mar 21 12:23:12.712: INFO: Got endpoints: latency-svc-hd7q8 [699.284482ms] Mar 21 12:23:12.733: INFO: Created: latency-svc-cn8kk Mar 21 12:23:12.749: INFO: Got endpoints: latency-svc-cn8kk [663.316954ms] Mar 21 12:23:12.769: INFO: Created: latency-svc-tc52l Mar 21 12:23:12.826: INFO: Got endpoints: latency-svc-tc52l [686.754158ms] Mar 21 12:23:12.828: INFO: Created: latency-svc-hxdjc Mar 21 12:23:12.834: INFO: Got endpoints: latency-svc-hxdjc [658.25867ms] Mar 21 12:23:12.853: INFO: Created: latency-svc-lc8md Mar 21 12:23:12.864: INFO: Got endpoints: latency-svc-lc8md [638.93906ms] Mar 21 12:23:12.883: INFO: Created: latency-svc-b4vqp Mar 21 12:23:12.894: INFO: Got endpoints: latency-svc-b4vqp [640.069687ms] Mar 21 12:23:12.913: INFO: Created: latency-svc-wp8hh Mar 21 12:23:12.925: INFO: Got endpoints: latency-svc-wp8hh [634.834067ms] Mar 21 12:23:12.982: INFO: Created: latency-svc-2tj2d Mar 21 12:23:12.986: INFO: Got endpoints: latency-svc-2tj2d [602.912954ms] Mar 21 12:23:13.027: INFO: Created: latency-svc-nr8nm Mar 21 12:23:13.039: INFO: Got endpoints: latency-svc-nr8nm [646.854866ms] Mar 21 12:23:13.075: INFO: Created: latency-svc-mpngw Mar 21 12:23:13.143: INFO: Got endpoints: latency-svc-mpngw [701.57854ms] Mar 21 12:23:13.146: INFO: Created: latency-svc-8jmw6 Mar 21 12:23:13.159: INFO: Got endpoints: latency-svc-8jmw6 [638.315372ms] Mar 21 12:23:13.183: INFO: Created: latency-svc-wq474 Mar 21 12:23:13.197: INFO: Got 
endpoints: latency-svc-wq474 [641.223828ms] Mar 21 12:23:13.225: INFO: Created: latency-svc-h6k4g Mar 21 12:23:13.239: INFO: Got endpoints: latency-svc-h6k4g [646.633905ms] Mar 21 12:23:13.281: INFO: Created: latency-svc-8zqx8 Mar 21 12:23:13.284: INFO: Got endpoints: latency-svc-8zqx8 [613.765024ms] Mar 21 12:23:13.315: INFO: Created: latency-svc-cxrdb Mar 21 12:23:13.345: INFO: Got endpoints: latency-svc-cxrdb [668.924412ms] Mar 21 12:23:13.375: INFO: Created: latency-svc-pxxmz Mar 21 12:23:13.424: INFO: Got endpoints: latency-svc-pxxmz [711.914823ms] Mar 21 12:23:13.428: INFO: Created: latency-svc-gshj2 Mar 21 12:23:13.444: INFO: Got endpoints: latency-svc-gshj2 [694.128738ms] Mar 21 12:23:13.472: INFO: Created: latency-svc-lcv9d Mar 21 12:23:13.480: INFO: Got endpoints: latency-svc-lcv9d [653.868572ms] Mar 21 12:23:13.501: INFO: Created: latency-svc-94txp Mar 21 12:23:13.599: INFO: Got endpoints: latency-svc-94txp [764.717375ms] Mar 21 12:23:13.603: INFO: Created: latency-svc-g9j7z Mar 21 12:23:13.613: INFO: Got endpoints: latency-svc-g9j7z [748.669646ms] Mar 21 12:23:13.633: INFO: Created: latency-svc-29flz Mar 21 12:23:13.649: INFO: Got endpoints: latency-svc-29flz [754.877651ms] Mar 21 12:23:13.669: INFO: Created: latency-svc-bvg5w Mar 21 12:23:13.685: INFO: Got endpoints: latency-svc-bvg5w [760.491461ms] Mar 21 12:23:13.743: INFO: Created: latency-svc-b5d96 Mar 21 12:23:13.745: INFO: Got endpoints: latency-svc-b5d96 [759.636833ms] Mar 21 12:23:13.777: INFO: Created: latency-svc-xcs6d Mar 21 12:23:13.788: INFO: Got endpoints: latency-svc-xcs6d [748.623577ms] Mar 21 12:23:13.807: INFO: Created: latency-svc-jm5px Mar 21 12:23:13.818: INFO: Got endpoints: latency-svc-jm5px [675.153181ms] Mar 21 12:23:13.910: INFO: Created: latency-svc-zlcxb Mar 21 12:23:13.912: INFO: Got endpoints: latency-svc-zlcxb [753.66815ms] Mar 21 12:23:13.975: INFO: Created: latency-svc-b4ds8 Mar 21 12:23:13.986: INFO: Got endpoints: latency-svc-b4ds8 [789.894163ms] Mar 21 12:23:14.059: 
INFO: Created: latency-svc-khpcf Mar 21 12:23:14.065: INFO: Got endpoints: latency-svc-khpcf [825.903916ms] Mar 21 12:23:14.089: INFO: Created: latency-svc-ndb7q Mar 21 12:23:14.101: INFO: Got endpoints: latency-svc-ndb7q [817.673795ms] Mar 21 12:23:14.131: INFO: Created: latency-svc-qrgnr Mar 21 12:23:14.143: INFO: Got endpoints: latency-svc-qrgnr [798.386969ms] Mar 21 12:23:14.197: INFO: Created: latency-svc-z9sz5 Mar 21 12:23:14.200: INFO: Got endpoints: latency-svc-z9sz5 [776.104049ms] Mar 21 12:23:14.221: INFO: Created: latency-svc-p8qr9 Mar 21 12:23:14.234: INFO: Got endpoints: latency-svc-p8qr9 [790.231881ms] Mar 21 12:23:14.250: INFO: Created: latency-svc-tw7rt Mar 21 12:23:14.264: INFO: Got endpoints: latency-svc-tw7rt [784.34627ms] Mar 21 12:23:14.280: INFO: Created: latency-svc-qgk8s Mar 21 12:23:14.295: INFO: Got endpoints: latency-svc-qgk8s [696.078014ms] Mar 21 12:23:14.340: INFO: Created: latency-svc-tjknm Mar 21 12:23:14.358: INFO: Got endpoints: latency-svc-tjknm [745.321194ms] Mar 21 12:23:14.395: INFO: Created: latency-svc-znsbs Mar 21 12:23:14.410: INFO: Got endpoints: latency-svc-znsbs [760.485395ms] Mar 21 12:23:14.485: INFO: Created: latency-svc-5jhnc Mar 21 12:23:14.487: INFO: Got endpoints: latency-svc-5jhnc [801.468865ms] Mar 21 12:23:14.526: INFO: Created: latency-svc-tmldq Mar 21 12:23:14.542: INFO: Got endpoints: latency-svc-tmldq [796.399894ms] Mar 21 12:23:14.563: INFO: Created: latency-svc-q9cmk Mar 21 12:23:14.571: INFO: Got endpoints: latency-svc-q9cmk [783.697204ms] Mar 21 12:23:14.628: INFO: Created: latency-svc-pszsh Mar 21 12:23:14.632: INFO: Got endpoints: latency-svc-pszsh [813.632215ms] Mar 21 12:23:14.659: INFO: Created: latency-svc-qhc6x Mar 21 12:23:14.674: INFO: Got endpoints: latency-svc-qhc6x [761.831552ms] Mar 21 12:23:14.706: INFO: Created: latency-svc-qwvvg Mar 21 12:23:14.778: INFO: Got endpoints: latency-svc-qwvvg [791.093892ms] Mar 21 12:23:14.780: INFO: Created: latency-svc-t4bqt Mar 21 12:23:14.789: INFO: Got 
endpoints: latency-svc-t4bqt [724.20538ms] Mar 21 12:23:14.810: INFO: Created: latency-svc-7swhx Mar 21 12:23:14.819: INFO: Got endpoints: latency-svc-7swhx [717.776705ms] Mar 21 12:23:14.839: INFO: Created: latency-svc-66cm4 Mar 21 12:23:14.850: INFO: Got endpoints: latency-svc-66cm4 [706.136962ms] Mar 21 12:23:14.868: INFO: Created: latency-svc-nwbnz Mar 21 12:23:14.922: INFO: Got endpoints: latency-svc-nwbnz [721.840323ms] Mar 21 12:23:14.988: INFO: Created: latency-svc-k58vq Mar 21 12:23:15.000: INFO: Got endpoints: latency-svc-k58vq [766.071182ms] Mar 21 12:23:15.065: INFO: Created: latency-svc-zglhg Mar 21 12:23:15.068: INFO: Got endpoints: latency-svc-zglhg [803.559299ms] Mar 21 12:23:15.384: INFO: Created: latency-svc-v9drf Mar 21 12:23:15.402: INFO: Got endpoints: latency-svc-v9drf [1.107024478s] Mar 21 12:23:15.659: INFO: Created: latency-svc-g2fwc Mar 21 12:23:15.680: INFO: Got endpoints: latency-svc-g2fwc [1.321599273s] Mar 21 12:23:15.703: INFO: Created: latency-svc-75mgj Mar 21 12:23:15.726: INFO: Got endpoints: latency-svc-75mgj [1.316336863s] Mar 21 12:23:15.749: INFO: Created: latency-svc-jhkt9 Mar 21 12:23:15.790: INFO: Got endpoints: latency-svc-jhkt9 [109.733831ms] Mar 21 12:23:15.797: INFO: Created: latency-svc-2wpfz Mar 21 12:23:15.811: INFO: Got endpoints: latency-svc-2wpfz [1.323881948s] Mar 21 12:23:15.836: INFO: Created: latency-svc-wndxf Mar 21 12:23:15.849: INFO: Got endpoints: latency-svc-wndxf [1.307466692s] Mar 21 12:23:15.882: INFO: Created: latency-svc-klrfs Mar 21 12:23:15.951: INFO: Got endpoints: latency-svc-klrfs [1.379558966s] Mar 21 12:23:15.960: INFO: Created: latency-svc-b65v8 Mar 21 12:23:15.974: INFO: Got endpoints: latency-svc-b65v8 [1.341949453s] Mar 21 12:23:15.995: INFO: Created: latency-svc-z2kvp Mar 21 12:23:16.004: INFO: Got endpoints: latency-svc-z2kvp [1.329678311s] Mar 21 12:23:16.032: INFO: Created: latency-svc-f75gk Mar 21 12:23:16.040: INFO: Got endpoints: latency-svc-f75gk [1.262321778s] Mar 21 12:23:16.090: 
INFO: Created: latency-svc-f6z6x Mar 21 12:23:16.092: INFO: Got endpoints: latency-svc-f6z6x [1.302787355s] Mar 21 12:23:16.122: INFO: Created: latency-svc-7swl7 Mar 21 12:23:16.137: INFO: Got endpoints: latency-svc-7swl7 [1.317798429s] Mar 21 12:23:16.191: INFO: Created: latency-svc-fvx5v Mar 21 12:23:16.238: INFO: Got endpoints: latency-svc-fvx5v [1.388824468s] Mar 21 12:23:16.259: INFO: Created: latency-svc-pp6vh Mar 21 12:23:16.276: INFO: Got endpoints: latency-svc-pp6vh [1.353744447s] Mar 21 12:23:16.295: INFO: Created: latency-svc-jt582 Mar 21 12:23:16.319: INFO: Got endpoints: latency-svc-jt582 [1.319355927s] Mar 21 12:23:16.395: INFO: Created: latency-svc-9smgt Mar 21 12:23:16.402: INFO: Got endpoints: latency-svc-9smgt [1.334194969s] Mar 21 12:23:16.422: INFO: Created: latency-svc-t9bmb Mar 21 12:23:16.432: INFO: Got endpoints: latency-svc-t9bmb [1.030570786s] Mar 21 12:23:16.457: INFO: Created: latency-svc-pp448 Mar 21 12:23:16.468: INFO: Got endpoints: latency-svc-pp448 [742.450841ms] Mar 21 12:23:16.538: INFO: Created: latency-svc-rw4k6 Mar 21 12:23:16.559: INFO: Got endpoints: latency-svc-rw4k6 [769.802033ms] Mar 21 12:23:16.584: INFO: Created: latency-svc-hhvg6 Mar 21 12:23:16.595: INFO: Got endpoints: latency-svc-hhvg6 [784.376451ms] Mar 21 12:23:16.614: INFO: Created: latency-svc-f5dwv Mar 21 12:23:16.626: INFO: Got endpoints: latency-svc-f5dwv [776.204268ms] Mar 21 12:23:16.683: INFO: Created: latency-svc-c4lwg Mar 21 12:23:16.685: INFO: Got endpoints: latency-svc-c4lwg [734.072065ms] Mar 21 12:23:16.728: INFO: Created: latency-svc-4fzlj Mar 21 12:23:16.740: INFO: Got endpoints: latency-svc-4fzlj [766.62583ms] Mar 21 12:23:16.758: INFO: Created: latency-svc-n8fpn Mar 21 12:23:16.771: INFO: Got endpoints: latency-svc-n8fpn [766.640388ms] Mar 21 12:23:16.827: INFO: Created: latency-svc-rmbkl Mar 21 12:23:16.830: INFO: Got endpoints: latency-svc-rmbkl [789.636045ms] Mar 21 12:23:16.854: INFO: Created: latency-svc-wpm8r Mar 21 12:23:16.877: INFO: Got 
endpoints: latency-svc-wpm8r [785.809024ms] Mar 21 12:23:16.902: INFO: Created: latency-svc-q7d6w Mar 21 12:23:16.910: INFO: Got endpoints: latency-svc-q7d6w [772.566335ms] Mar 21 12:23:16.965: INFO: Created: latency-svc-7fn7k Mar 21 12:23:16.967: INFO: Got endpoints: latency-svc-7fn7k [728.093471ms] Mar 21 12:23:16.998: INFO: Created: latency-svc-97w4t Mar 21 12:23:17.012: INFO: Got endpoints: latency-svc-97w4t [735.832336ms] Mar 21 12:23:17.034: INFO: Created: latency-svc-trgs7 Mar 21 12:23:17.049: INFO: Got endpoints: latency-svc-trgs7 [729.009217ms] Mar 21 12:23:17.113: INFO: Created: latency-svc-t6v2w Mar 21 12:23:17.115: INFO: Got endpoints: latency-svc-t6v2w [713.192146ms] Mar 21 12:23:17.142: INFO: Created: latency-svc-4cmht Mar 21 12:23:17.151: INFO: Got endpoints: latency-svc-4cmht [718.475566ms] Mar 21 12:23:17.171: INFO: Created: latency-svc-tlxtc Mar 21 12:23:17.187: INFO: Got endpoints: latency-svc-tlxtc [718.979031ms] Mar 21 12:23:17.251: INFO: Created: latency-svc-hbg9p Mar 21 12:23:17.254: INFO: Got endpoints: latency-svc-hbg9p [694.254861ms] Mar 21 12:23:17.304: INFO: Created: latency-svc-vhzcd Mar 21 12:23:17.320: INFO: Got endpoints: latency-svc-vhzcd [724.916836ms] Mar 21 12:23:17.340: INFO: Created: latency-svc-5nlrv Mar 21 12:23:17.478: INFO: Got endpoints: latency-svc-5nlrv [852.860732ms] Mar 21 12:23:17.481: INFO: Created: latency-svc-jhm9x Mar 21 12:23:17.488: INFO: Got endpoints: latency-svc-jhm9x [803.096492ms] Mar 21 12:23:17.688: INFO: Created: latency-svc-fhtqp Mar 21 12:23:17.693: INFO: Got endpoints: latency-svc-fhtqp [952.460983ms] Mar 21 12:23:17.736: INFO: Created: latency-svc-fvjzc Mar 21 12:23:17.747: INFO: Got endpoints: latency-svc-fvjzc [975.937754ms] Mar 21 12:23:17.765: INFO: Created: latency-svc-q2f7k Mar 21 12:23:17.862: INFO: Got endpoints: latency-svc-q2f7k [1.031767166s] Mar 21 12:23:17.863: INFO: Created: latency-svc-sqwc6 Mar 21 12:23:17.873: INFO: Got endpoints: latency-svc-sqwc6 [995.39536ms] Mar 21 12:23:17.891: 
INFO: Created: latency-svc-99pjz Mar 21 12:23:17.903: INFO: Got endpoints: latency-svc-99pjz [993.641553ms] Mar 21 12:23:17.921: INFO: Created: latency-svc-vf879 Mar 21 12:23:17.934: INFO: Got endpoints: latency-svc-vf879 [967.422188ms] Mar 21 12:23:18.042: INFO: Created: latency-svc-4xtnp Mar 21 12:23:18.045: INFO: Got endpoints: latency-svc-4xtnp [1.032638699s] Mar 21 12:23:18.070: INFO: Created: latency-svc-52xpx Mar 21 12:23:18.090: INFO: Got endpoints: latency-svc-52xpx [1.041918067s] Mar 21 12:23:18.119: INFO: Created: latency-svc-2v9d6 Mar 21 12:23:18.179: INFO: Got endpoints: latency-svc-2v9d6 [1.06345933s] Mar 21 12:23:18.191: INFO: Created: latency-svc-nt79s Mar 21 12:23:18.784: INFO: Got endpoints: latency-svc-nt79s [1.633394331s] Mar 21 12:23:19.320: INFO: Created: latency-svc-ch94t Mar 21 12:23:19.329: INFO: Got endpoints: latency-svc-ch94t [2.141495792s] Mar 21 12:23:19.372: INFO: Created: latency-svc-7lm75 Mar 21 12:23:19.379: INFO: Got endpoints: latency-svc-7lm75 [2.125215307s] Mar 21 12:23:19.407: INFO: Created: latency-svc-xc5x6 Mar 21 12:23:19.502: INFO: Got endpoints: latency-svc-xc5x6 [2.181803963s] Mar 21 12:23:19.509: INFO: Created: latency-svc-qbqmp Mar 21 12:23:19.518: INFO: Got endpoints: latency-svc-qbqmp [2.03894436s] Mar 21 12:23:19.546: INFO: Created: latency-svc-lh47n Mar 21 12:23:19.554: INFO: Got endpoints: latency-svc-lh47n [2.065787565s] Mar 21 12:23:19.576: INFO: Created: latency-svc-jzgtj Mar 21 12:23:19.584: INFO: Got endpoints: latency-svc-jzgtj [1.891225017s] Mar 21 12:23:19.635: INFO: Created: latency-svc-kv7tr Mar 21 12:23:19.647: INFO: Got endpoints: latency-svc-kv7tr [1.899963397s] Mar 21 12:23:19.678: INFO: Created: latency-svc-s6qx4 Mar 21 12:23:19.693: INFO: Got endpoints: latency-svc-s6qx4 [1.831590615s] Mar 21 12:23:19.714: INFO: Created: latency-svc-68txx Mar 21 12:23:19.729: INFO: Got endpoints: latency-svc-68txx [1.856467646s] Mar 21 12:23:19.778: INFO: Created: latency-svc-fbj2d Mar 21 12:23:19.784: INFO: Got 
endpoints: latency-svc-fbj2d [1.880157688s] Mar 21 12:23:19.804: INFO: Created: latency-svc-l7jcc Mar 21 12:23:19.814: INFO: Got endpoints: latency-svc-l7jcc [1.880100616s] Mar 21 12:23:19.852: INFO: Created: latency-svc-jfz7s Mar 21 12:23:19.868: INFO: Got endpoints: latency-svc-jfz7s [1.823596794s] Mar 21 12:23:19.922: INFO: Created: latency-svc-mnblc Mar 21 12:23:19.947: INFO: Got endpoints: latency-svc-mnblc [1.856518761s] Mar 21 12:23:19.948: INFO: Created: latency-svc-fwwzb Mar 21 12:23:19.971: INFO: Got endpoints: latency-svc-fwwzb [1.792178674s] Mar 21 12:23:20.072: INFO: Created: latency-svc-mq4dx Mar 21 12:23:20.075: INFO: Got endpoints: latency-svc-mq4dx [1.290187811s] Mar 21 12:23:20.098: INFO: Created: latency-svc-zc8hf Mar 21 12:23:20.109: INFO: Got endpoints: latency-svc-zc8hf [780.170929ms] Mar 21 12:23:20.127: INFO: Created: latency-svc-d84dc Mar 21 12:23:20.140: INFO: Got endpoints: latency-svc-d84dc [760.695832ms] Mar 21 12:23:20.158: INFO: Created: latency-svc-dcvfj Mar 21 12:23:20.251: INFO: Got endpoints: latency-svc-dcvfj [748.788201ms] Mar 21 12:23:20.253: INFO: Created: latency-svc-d4p7c Mar 21 12:23:20.260: INFO: Got endpoints: latency-svc-d4p7c [742.430607ms] Mar 21 12:23:20.278: INFO: Created: latency-svc-l2nw2 Mar 21 12:23:20.291: INFO: Got endpoints: latency-svc-l2nw2 [736.365061ms] Mar 21 12:23:20.308: INFO: Created: latency-svc-rw28f Mar 21 12:23:20.321: INFO: Got endpoints: latency-svc-rw28f [736.508998ms] Mar 21 12:23:20.337: INFO: Created: latency-svc-hz22q Mar 21 12:23:20.406: INFO: Got endpoints: latency-svc-hz22q [759.591241ms] Mar 21 12:23:20.408: INFO: Created: latency-svc-wfk6d Mar 21 12:23:20.417: INFO: Got endpoints: latency-svc-wfk6d [723.922718ms] Mar 21 12:23:20.434: INFO: Created: latency-svc-t56t2 Mar 21 12:23:20.448: INFO: Got endpoints: latency-svc-t56t2 [718.204694ms] Mar 21 12:23:20.470: INFO: Created: latency-svc-5v8j7 Mar 21 12:23:20.479: INFO: Got endpoints: latency-svc-5v8j7 [694.910287ms] Mar 21 12:23:20.556: 
INFO: Created: latency-svc-sxh6h Mar 21 12:23:20.559: INFO: Got endpoints: latency-svc-sxh6h [744.321533ms] Mar 21 12:23:20.589: INFO: Created: latency-svc-7z6qg Mar 21 12:23:20.605: INFO: Got endpoints: latency-svc-7z6qg [736.597289ms] Mar 21 12:23:20.626: INFO: Created: latency-svc-c8ht6 Mar 21 12:23:20.635: INFO: Got endpoints: latency-svc-c8ht6 [687.985556ms] Mar 21 12:23:20.655: INFO: Created: latency-svc-xcts9 Mar 21 12:23:20.712: INFO: Got endpoints: latency-svc-xcts9 [740.547381ms] Mar 21 12:23:20.746: INFO: Created: latency-svc-jj4dg Mar 21 12:23:20.804: INFO: Got endpoints: latency-svc-jj4dg [729.348573ms] Mar 21 12:23:20.874: INFO: Created: latency-svc-8dfqp Mar 21 12:23:20.888: INFO: Got endpoints: latency-svc-8dfqp [778.652722ms] Mar 21 12:23:20.907: INFO: Created: latency-svc-xw7dg Mar 21 12:23:20.930: INFO: Got endpoints: latency-svc-xw7dg [790.24563ms] Mar 21 12:23:20.950: INFO: Created: latency-svc-bb77f Mar 21 12:23:21.047: INFO: Got endpoints: latency-svc-bb77f [796.009916ms] Mar 21 12:23:21.049: INFO: Created: latency-svc-mzqsk Mar 21 12:23:21.056: INFO: Got endpoints: latency-svc-mzqsk [796.079015ms] Mar 21 12:23:21.075: INFO: Created: latency-svc-7ghhh Mar 21 12:23:21.087: INFO: Got endpoints: latency-svc-7ghhh [796.055862ms] Mar 21 12:23:21.106: INFO: Created: latency-svc-9g6rr Mar 21 12:23:21.117: INFO: Got endpoints: latency-svc-9g6rr [796.428537ms] Mar 21 12:23:21.135: INFO: Created: latency-svc-dm6p4 Mar 21 12:23:21.191: INFO: Got endpoints: latency-svc-dm6p4 [784.377663ms] Mar 21 12:23:21.192: INFO: Created: latency-svc-ptvkb Mar 21 12:23:21.195: INFO: Got endpoints: latency-svc-ptvkb [778.246205ms] Mar 21 12:23:21.220: INFO: Created: latency-svc-mntqt Mar 21 12:23:21.232: INFO: Got endpoints: latency-svc-mntqt [784.183163ms] Mar 21 12:23:21.250: INFO: Created: latency-svc-nsmlj Mar 21 12:23:21.280: INFO: Created: latency-svc-lhz9x Mar 21 12:23:21.352: INFO: Created: latency-svc-kf2nd Mar 21 12:23:21.352: INFO: Got endpoints: 
latency-svc-nsmlj [873.301474ms] Mar 21 12:23:21.365: INFO: Got endpoints: latency-svc-kf2nd [760.38821ms] Mar 21 12:23:21.394: INFO: Got endpoints: latency-svc-lhz9x [835.150718ms] Mar 21 12:23:21.394: INFO: Created: latency-svc-z6crf Mar 21 12:23:21.408: INFO: Got endpoints: latency-svc-z6crf [772.330211ms] Mar 21 12:23:21.485: INFO: Created: latency-svc-ggtbt Mar 21 12:23:21.487: INFO: Got endpoints: latency-svc-ggtbt [775.353838ms] Mar 21 12:23:21.513: INFO: Created: latency-svc-nw7tg Mar 21 12:23:21.528: INFO: Got endpoints: latency-svc-nw7tg [723.818964ms] Mar 21 12:23:21.549: INFO: Created: latency-svc-mdg82 Mar 21 12:23:21.564: INFO: Got endpoints: latency-svc-mdg82 [676.137503ms] Mar 21 12:23:21.622: INFO: Created: latency-svc-8fsft Mar 21 12:23:21.624: INFO: Got endpoints: latency-svc-8fsft [694.285591ms] Mar 21 12:23:21.651: INFO: Created: latency-svc-8lqj8 Mar 21 12:23:21.667: INFO: Got endpoints: latency-svc-8lqj8 [620.038944ms] Mar 21 12:23:21.688: INFO: Created: latency-svc-bvn9t Mar 21 12:23:21.697: INFO: Got endpoints: latency-svc-bvn9t [641.054231ms] Mar 21 12:23:21.718: INFO: Created: latency-svc-mnq2k Mar 21 12:23:21.772: INFO: Got endpoints: latency-svc-mnq2k [684.818535ms] Mar 21 12:23:21.774: INFO: Created: latency-svc-l85cv Mar 21 12:23:21.782: INFO: Got endpoints: latency-svc-l85cv [664.857127ms] Mar 21 12:23:21.807: INFO: Created: latency-svc-zssz8 Mar 21 12:23:21.818: INFO: Got endpoints: latency-svc-zssz8 [627.470231ms] Mar 21 12:23:21.838: INFO: Created: latency-svc-sjpxt Mar 21 12:23:21.848: INFO: Got endpoints: latency-svc-sjpxt [652.913871ms] Mar 21 12:23:21.940: INFO: Created: latency-svc-nfvtx Mar 21 12:23:21.953: INFO: Got endpoints: latency-svc-nfvtx [720.579834ms] Mar 21 12:23:21.969: INFO: Created: latency-svc-tbphh Mar 21 12:23:21.983: INFO: Got endpoints: latency-svc-tbphh [630.749594ms] Mar 21 12:23:22.000: INFO: Created: latency-svc-dksfv Mar 21 12:23:22.025: INFO: Got endpoints: latency-svc-dksfv [659.610382ms] Mar 21 
12:23:22.077: INFO: Created: latency-svc-lpv67 Mar 21 12:23:22.085: INFO: Got endpoints: latency-svc-lpv67 [691.262043ms] Mar 21 12:23:22.137: INFO: Created: latency-svc-n7vpw Mar 21 12:23:22.146: INFO: Got endpoints: latency-svc-n7vpw [738.022206ms] Mar 21 12:23:22.233: INFO: Created: latency-svc-l6tzv Mar 21 12:23:22.235: INFO: Got endpoints: latency-svc-l6tzv [747.926498ms] Mar 21 12:23:22.299: INFO: Created: latency-svc-t4gx4 Mar 21 12:23:22.314: INFO: Got endpoints: latency-svc-t4gx4 [785.917072ms] Mar 21 12:23:22.395: INFO: Created: latency-svc-qzs89 Mar 21 12:23:22.413: INFO: Got endpoints: latency-svc-qzs89 [848.929816ms] Mar 21 12:23:22.443: INFO: Created: latency-svc-kdb9m Mar 21 12:23:22.458: INFO: Got endpoints: latency-svc-kdb9m [833.88798ms] Mar 21 12:23:22.479: INFO: Created: latency-svc-k6s8j Mar 21 12:23:22.488: INFO: Got endpoints: latency-svc-k6s8j [821.358429ms] Mar 21 12:23:22.539: INFO: Created: latency-svc-47m8x Mar 21 12:23:22.542: INFO: Got endpoints: latency-svc-47m8x [845.038849ms] Mar 21 12:23:22.564: INFO: Created: latency-svc-ps2sk Mar 21 12:23:22.593: INFO: Got endpoints: latency-svc-ps2sk [821.512294ms] Mar 21 12:23:22.624: INFO: Created: latency-svc-rc4mz Mar 21 12:23:22.688: INFO: Got endpoints: latency-svc-rc4mz [905.786183ms] Mar 21 12:23:22.689: INFO: Created: latency-svc-dj488 Mar 21 12:23:22.693: INFO: Got endpoints: latency-svc-dj488 [875.226219ms] Mar 21 12:23:22.719: INFO: Created: latency-svc-6qd68 Mar 21 12:23:22.730: INFO: Got endpoints: latency-svc-6qd68 [881.208725ms] Mar 21 12:23:22.750: INFO: Created: latency-svc-2cwmn Mar 21 12:23:22.760: INFO: Got endpoints: latency-svc-2cwmn [807.435367ms] Mar 21 12:23:22.779: INFO: Created: latency-svc-bkl7n Mar 21 12:23:22.862: INFO: Got endpoints: latency-svc-bkl7n [878.946908ms] Mar 21 12:23:22.864: INFO: Created: latency-svc-mxjx6 Mar 21 12:23:22.874: INFO: Got endpoints: latency-svc-mxjx6 [849.325834ms] Mar 21 12:23:22.894: INFO: Created: latency-svc-8gjqn Mar 21 
12:23:22.911: INFO: Got endpoints: latency-svc-8gjqn [826.002695ms] Mar 21 12:23:22.930: INFO: Created: latency-svc-nks97 Mar 21 12:23:22.941: INFO: Got endpoints: latency-svc-nks97 [795.769123ms] Mar 21 12:23:23.006: INFO: Created: latency-svc-xm7kl Mar 21 12:23:23.008: INFO: Got endpoints: latency-svc-xm7kl [772.952408ms] Mar 21 12:23:23.008: INFO: Latencies: [38.473363ms 68.693782ms 109.733831ms 114.162187ms 134.877963ms 171.826941ms 201.246303ms 257.247639ms 303.762267ms 333.366163ms 399.701893ms 436.493565ms 472.633762ms 533.216968ms 602.912954ms 605.19699ms 613.765024ms 620.038944ms 627.470231ms 630.749594ms 634.834067ms 638.315372ms 638.93906ms 640.069687ms 641.054231ms 641.223828ms 646.633905ms 646.854866ms 652.913871ms 653.868572ms 658.25867ms 659.610382ms 663.316954ms 664.857127ms 668.924412ms 675.153181ms 676.137503ms 681.815568ms 684.818535ms 686.754158ms 687.985556ms 691.262043ms 694.128738ms 694.254861ms 694.285591ms 694.910287ms 696.078014ms 699.284482ms 701.57854ms 706.038465ms 706.136962ms 708.347087ms 711.914823ms 713.192146ms 717.536485ms 717.776705ms 718.204694ms 718.475566ms 718.979031ms 720.579834ms 721.840323ms 722.997855ms 723.818964ms 723.922718ms 724.20538ms 724.916836ms 728.093471ms 729.009217ms 729.348573ms 733.783086ms 734.072065ms 735.832336ms 736.365061ms 736.508998ms 736.597289ms 738.022206ms 740.547381ms 742.430607ms 742.450841ms 744.321533ms 745.321194ms 747.667351ms 747.926498ms 748.265969ms 748.623577ms 748.669646ms 748.788201ms 753.66815ms 754.044973ms 754.877651ms 758.459328ms 759.591241ms 759.636833ms 760.38821ms 760.485395ms 760.491461ms 760.695832ms 761.831552ms 764.717375ms 765.315441ms 766.066865ms 766.071182ms 766.287763ms 766.62583ms 766.640388ms 769.802033ms 772.330211ms 772.566335ms 772.952408ms 775.353838ms 776.104049ms 776.204268ms 778.246205ms 778.652722ms 780.170929ms 783.614232ms 783.697204ms 784.183163ms 784.34627ms 784.376451ms 784.377663ms 785.396562ms 785.809024ms 785.917072ms 789.636045ms 789.894163ms 
790.231881ms 790.24563ms 791.093892ms 795.769123ms 796.009916ms 796.055862ms 796.079015ms 796.399894ms 796.428537ms 798.386969ms 801.468865ms 803.096492ms 803.559299ms 807.435367ms 809.081605ms 810.860515ms 813.632215ms 817.673795ms 821.358429ms 821.512294ms 825.903916ms 826.002695ms 833.88798ms 835.150718ms 845.038849ms 848.929816ms 849.325834ms 852.860732ms 873.301474ms 875.226219ms 878.946908ms 881.208725ms 905.786183ms 952.460983ms 967.422188ms 975.937754ms 993.641553ms 995.39536ms 1.030570786s 1.031767166s 1.032638699s 1.041918067s 1.06345933s 1.107024478s 1.262321778s 1.290187811s 1.302787355s 1.307466692s 1.316336863s 1.317798429s 1.319355927s 1.321599273s 1.323881948s 1.329678311s 1.334194969s 1.341949453s 1.353744447s 1.379558966s 1.388824468s 1.633394331s 1.792178674s 1.823596794s 1.831590615s 1.856467646s 1.856518761s 1.880100616s 1.880157688s 1.891225017s 1.899963397s 2.03894436s 2.065787565s 2.125215307s 2.141495792s 2.181803963s] Mar 21 12:23:23.008: INFO: 50 %ile: 766.066865ms Mar 21 12:23:23.008: INFO: 90 %ile: 1.334194969s Mar 21 12:23:23.008: INFO: 99 %ile: 2.141495792s Mar 21 12:23:23.008: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:23:23.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-t89cg" for this suite. 
Mar 21 12:23:47.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:23:47.053: INFO: namespace: e2e-tests-svc-latency-t89cg, resource: bindings, ignored listing per whitelist Mar 21 12:23:47.112: INFO: namespace e2e-tests-svc-latency-t89cg deletion completed in 24.091890729s • [SLOW TEST:39.214 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:23:47.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 21 12:23:57.262: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lvxpp PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 12:23:57.262: INFO: >>> kubeConfig: /root/.kube/config I0321 12:23:57.301973 6 
log.go:172] (0xc0009f1810) (0xc002579720) Create stream I0321 12:23:57.302013 6 log.go:172] (0xc0009f1810) (0xc002579720) Stream added, broadcasting: 1 I0321 12:23:57.303786 6 log.go:172] (0xc0009f1810) Reply frame received for 1 I0321 12:23:57.303832 6 log.go:172] (0xc0009f1810) (0xc00142f860) Create stream I0321 12:23:57.303848 6 log.go:172] (0xc0009f1810) (0xc00142f860) Stream added, broadcasting: 3 I0321 12:23:57.305019 6 log.go:172] (0xc0009f1810) Reply frame received for 3 I0321 12:23:57.305063 6 log.go:172] (0xc0009f1810) (0xc00237ac80) Create stream I0321 12:23:57.305079 6 log.go:172] (0xc0009f1810) (0xc00237ac80) Stream added, broadcasting: 5 I0321 12:23:57.306444 6 log.go:172] (0xc0009f1810) Reply frame received for 5 I0321 12:23:57.397029 6 log.go:172] (0xc0009f1810) Data frame received for 5 I0321 12:23:57.397055 6 log.go:172] (0xc00237ac80) (5) Data frame handling I0321 12:23:57.397075 6 log.go:172] (0xc0009f1810) Data frame received for 3 I0321 12:23:57.397081 6 log.go:172] (0xc00142f860) (3) Data frame handling I0321 12:23:57.397088 6 log.go:172] (0xc00142f860) (3) Data frame sent I0321 12:23:57.397095 6 log.go:172] (0xc0009f1810) Data frame received for 3 I0321 12:23:57.397102 6 log.go:172] (0xc00142f860) (3) Data frame handling I0321 12:23:57.398825 6 log.go:172] (0xc0009f1810) Data frame received for 1 I0321 12:23:57.398849 6 log.go:172] (0xc002579720) (1) Data frame handling I0321 12:23:57.398866 6 log.go:172] (0xc002579720) (1) Data frame sent I0321 12:23:57.398875 6 log.go:172] (0xc0009f1810) (0xc002579720) Stream removed, broadcasting: 1 I0321 12:23:57.398937 6 log.go:172] (0xc0009f1810) (0xc002579720) Stream removed, broadcasting: 1 I0321 12:23:57.398955 6 log.go:172] (0xc0009f1810) (0xc00142f860) Stream removed, broadcasting: 3 I0321 12:23:57.398974 6 log.go:172] (0xc0009f1810) Go away received I0321 12:23:57.399008 6 log.go:172] (0xc0009f1810) (0xc00237ac80) Stream removed, broadcasting: 5 Mar 21 12:23:57.399: INFO: Exec stderr: "" Mar 21 
12:23:57.399: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lvxpp PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 12:23:57.399: INFO: >>> kubeConfig: /root/.kube/config I0321 12:23:57.425712 6 log.go:172] (0xc0021d42c0) (0xc00237af00) Create stream I0321 12:23:57.425737 6 log.go:172] (0xc0021d42c0) (0xc00237af00) Stream added, broadcasting: 1 I0321 12:23:57.427411 6 log.go:172] (0xc0021d42c0) Reply frame received for 1 I0321 12:23:57.427437 6 log.go:172] (0xc0021d42c0) (0xc0025797c0) Create stream I0321 12:23:57.427445 6 log.go:172] (0xc0021d42c0) (0xc0025797c0) Stream added, broadcasting: 3 I0321 12:23:57.428140 6 log.go:172] (0xc0021d42c0) Reply frame received for 3 I0321 12:23:57.428186 6 log.go:172] (0xc0021d42c0) (0xc002579860) Create stream I0321 12:23:57.428200 6 log.go:172] (0xc0021d42c0) (0xc002579860) Stream added, broadcasting: 5 I0321 12:23:57.428879 6 log.go:172] (0xc0021d42c0) Reply frame received for 5 I0321 12:23:57.504883 6 log.go:172] (0xc0021d42c0) Data frame received for 5 I0321 12:23:57.504925 6 log.go:172] (0xc002579860) (5) Data frame handling I0321 12:23:57.504962 6 log.go:172] (0xc0021d42c0) Data frame received for 3 I0321 12:23:57.504989 6 log.go:172] (0xc0025797c0) (3) Data frame handling I0321 12:23:57.504998 6 log.go:172] (0xc0025797c0) (3) Data frame sent I0321 12:23:57.505007 6 log.go:172] (0xc0021d42c0) Data frame received for 3 I0321 12:23:57.505019 6 log.go:172] (0xc0025797c0) (3) Data frame handling I0321 12:23:57.506650 6 log.go:172] (0xc0021d42c0) Data frame received for 1 I0321 12:23:57.506669 6 log.go:172] (0xc00237af00) (1) Data frame handling I0321 12:23:57.506681 6 log.go:172] (0xc00237af00) (1) Data frame sent I0321 12:23:57.506811 6 log.go:172] (0xc0021d42c0) (0xc00237af00) Stream removed, broadcasting: 1 I0321 12:23:57.506875 6 log.go:172] (0xc0021d42c0) Go away received I0321 12:23:57.507002 6 
log.go:172] (0xc0021d42c0) (0xc00237af00) Stream removed, broadcasting: 1 I0321 12:23:57.507035 6 log.go:172] (0xc0021d42c0) (0xc0025797c0) Stream removed, broadcasting: 3 I0321 12:23:57.507065 6 log.go:172] (0xc0021d42c0) (0xc002579860) Stream removed, broadcasting: 5 Mar 21 12:23:57.507: INFO: Exec stderr: "" Mar 21 12:23:57.507: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lvxpp PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 12:23:57.507: INFO: >>> kubeConfig: /root/.kube/config I0321 12:23:57.544460 6 log.go:172] (0xc0015ac580) (0xc000ded360) Create stream I0321 12:23:57.544485 6 log.go:172] (0xc0015ac580) (0xc000ded360) Stream added, broadcasting: 1 I0321 12:23:57.548265 6 log.go:172] (0xc0015ac580) Reply frame received for 1 I0321 12:23:57.548333 6 log.go:172] (0xc0015ac580) (0xc00142f9a0) Create stream I0321 12:23:57.548363 6 log.go:172] (0xc0015ac580) (0xc00142f9a0) Stream added, broadcasting: 3 I0321 12:23:57.549523 6 log.go:172] (0xc0015ac580) Reply frame received for 3 I0321 12:23:57.549572 6 log.go:172] (0xc0015ac580) (0xc002579900) Create stream I0321 12:23:57.549587 6 log.go:172] (0xc0015ac580) (0xc002579900) Stream added, broadcasting: 5 I0321 12:23:57.550523 6 log.go:172] (0xc0015ac580) Reply frame received for 5 I0321 12:23:57.599590 6 log.go:172] (0xc0015ac580) Data frame received for 3 I0321 12:23:57.599647 6 log.go:172] (0xc00142f9a0) (3) Data frame handling I0321 12:23:57.599674 6 log.go:172] (0xc00142f9a0) (3) Data frame sent I0321 12:23:57.599695 6 log.go:172] (0xc0015ac580) Data frame received for 3 I0321 12:23:57.599715 6 log.go:172] (0xc00142f9a0) (3) Data frame handling I0321 12:23:57.599763 6 log.go:172] (0xc0015ac580) Data frame received for 5 I0321 12:23:57.599819 6 log.go:172] (0xc002579900) (5) Data frame handling I0321 12:23:57.601657 6 log.go:172] (0xc0015ac580) Data frame received for 1 I0321 12:23:57.601694 6 
log.go:172] (0xc000ded360) (1) Data frame handling I0321 12:23:57.601717 6 log.go:172] (0xc000ded360) (1) Data frame sent I0321 12:23:57.601739 6 log.go:172] (0xc0015ac580) (0xc000ded360) Stream removed, broadcasting: 1 I0321 12:23:57.601786 6 log.go:172] (0xc0015ac580) Go away received I0321 12:23:57.601856 6 log.go:172] (0xc0015ac580) (0xc000ded360) Stream removed, broadcasting: 1 I0321 12:23:57.601884 6 log.go:172] (0xc0015ac580) (0xc00142f9a0) Stream removed, broadcasting: 3 I0321 12:23:57.601902 6 log.go:172] (0xc0015ac580) (0xc002579900) Stream removed, broadcasting: 5 Mar 21 12:23:57.601: INFO: Exec stderr: "" Mar 21 12:23:57.601: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lvxpp PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 12:23:57.602: INFO: >>> kubeConfig: /root/.kube/config I0321 12:23:57.635397 6 log.go:172] (0xc0015aca50) (0xc000ded680) Create stream I0321 12:23:57.635429 6 log.go:172] (0xc0015aca50) (0xc000ded680) Stream added, broadcasting: 1 I0321 12:23:57.639575 6 log.go:172] (0xc0015aca50) Reply frame received for 1 I0321 12:23:57.639607 6 log.go:172] (0xc0015aca50) (0xc00142fa40) Create stream I0321 12:23:57.639619 6 log.go:172] (0xc0015aca50) (0xc00142fa40) Stream added, broadcasting: 3 I0321 12:23:57.642001 6 log.go:172] (0xc0015aca50) Reply frame received for 3 I0321 12:23:57.642037 6 log.go:172] (0xc0015aca50) (0xc0025799a0) Create stream I0321 12:23:57.642052 6 log.go:172] (0xc0015aca50) (0xc0025799a0) Stream added, broadcasting: 5 I0321 12:23:57.643111 6 log.go:172] (0xc0015aca50) Reply frame received for 5 I0321 12:23:57.698884 6 log.go:172] (0xc0015aca50) Data frame received for 5 I0321 12:23:57.698934 6 log.go:172] (0xc0025799a0) (5) Data frame handling I0321 12:23:57.698976 6 log.go:172] (0xc0015aca50) Data frame received for 3 I0321 12:23:57.699009 6 log.go:172] (0xc00142fa40) (3) Data frame handling I0321 
12:23:57.699051 6 log.go:172] (0xc00142fa40) (3) Data frame sent I0321 12:23:57.699076 6 log.go:172] (0xc0015aca50) Data frame received for 3 I0321 12:23:57.699099 6 log.go:172] (0xc00142fa40) (3) Data frame handling I0321 12:23:57.700718 6 log.go:172] (0xc0015aca50) Data frame received for 1 I0321 12:23:57.700759 6 log.go:172] (0xc000ded680) (1) Data frame handling I0321 12:23:57.700803 6 log.go:172] (0xc000ded680) (1) Data frame sent I0321 12:23:57.700836 6 log.go:172] (0xc0015aca50) (0xc000ded680) Stream removed, broadcasting: 1 I0321 12:23:57.700867 6 log.go:172] (0xc0015aca50) Go away received I0321 12:23:57.701346 6 log.go:172] (0xc0015aca50) (0xc000ded680) Stream removed, broadcasting: 1 I0321 12:23:57.701377 6 log.go:172] (0xc0015aca50) (0xc00142fa40) Stream removed, broadcasting: 3 I0321 12:23:57.701388 6 log.go:172] (0xc0015aca50) (0xc0025799a0) Stream removed, broadcasting: 5 Mar 21 12:23:57.701: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 21 12:23:57.701: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lvxpp PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 12:23:57.701: INFO: >>> kubeConfig: /root/.kube/config I0321 12:23:57.730939 6 log.go:172] (0xc001d4a2c0) (0xc00142fcc0) Create stream I0321 12:23:57.730961 6 log.go:172] (0xc001d4a2c0) (0xc00142fcc0) Stream added, broadcasting: 1 I0321 12:23:57.732775 6 log.go:172] (0xc001d4a2c0) Reply frame received for 1 I0321 12:23:57.732865 6 log.go:172] (0xc001d4a2c0) (0xc002579b80) Create stream I0321 12:23:57.732889 6 log.go:172] (0xc001d4a2c0) (0xc002579b80) Stream added, broadcasting: 3 I0321 12:23:57.734296 6 log.go:172] (0xc001d4a2c0) Reply frame received for 3 I0321 12:23:57.734337 6 log.go:172] (0xc001d4a2c0) (0xc002579c20) Create stream I0321 12:23:57.734351 6 log.go:172] (0xc001d4a2c0) (0xc002579c20) 
Stream added, broadcasting: 5 I0321 12:23:57.735472 6 log.go:172] (0xc001d4a2c0) Reply frame received for 5 I0321 12:23:57.795148 6 log.go:172] (0xc001d4a2c0) Data frame received for 5 I0321 12:23:57.795187 6 log.go:172] (0xc002579c20) (5) Data frame handling I0321 12:23:57.795214 6 log.go:172] (0xc001d4a2c0) Data frame received for 3 I0321 12:23:57.795244 6 log.go:172] (0xc002579b80) (3) Data frame handling I0321 12:23:57.795279 6 log.go:172] (0xc002579b80) (3) Data frame sent I0321 12:23:57.795299 6 log.go:172] (0xc001d4a2c0) Data frame received for 3 I0321 12:23:57.795318 6 log.go:172] (0xc002579b80) (3) Data frame handling I0321 12:23:57.796986 6 log.go:172] (0xc001d4a2c0) Data frame received for 1 I0321 12:23:57.797031 6 log.go:172] (0xc00142fcc0) (1) Data frame handling I0321 12:23:57.797052 6 log.go:172] (0xc00142fcc0) (1) Data frame sent I0321 12:23:57.797071 6 log.go:172] (0xc001d4a2c0) (0xc00142fcc0) Stream removed, broadcasting: 1 I0321 12:23:57.797280 6 log.go:172] (0xc001d4a2c0) Go away received I0321 12:23:57.797325 6 log.go:172] (0xc001d4a2c0) (0xc00142fcc0) Stream removed, broadcasting: 1 I0321 12:23:57.797351 6 log.go:172] (0xc001d4a2c0) (0xc002579b80) Stream removed, broadcasting: 3 I0321 12:23:57.797364 6 log.go:172] (0xc001d4a2c0) (0xc002579c20) Stream removed, broadcasting: 5 Mar 21 12:23:57.797: INFO: Exec stderr: "" Mar 21 12:23:57.797: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lvxpp PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 21 12:23:57.797: INFO: >>> kubeConfig: /root/.kube/config I0321 12:23:57.828678 6 log.go:172] (0xc001d4a790) (0xc00142ff40) Create stream I0321 12:23:57.828708 6 log.go:172] (0xc001d4a790) (0xc00142ff40) Stream added, broadcasting: 1 I0321 12:23:57.831113 6 log.go:172] (0xc001d4a790) Reply frame received for 1 I0321 12:23:57.831150 6 log.go:172] (0xc001d4a790) (0xc000ded720) Create stream 
I0321 12:23:57.831163 6 log.go:172] (0xc001d4a790) (0xc000ded720) Stream added, broadcasting: 3 I0321 12:23:57.832075 6 log.go:172] (0xc001d4a790) Reply frame received for 3 I0321 12:23:57.832126 6 log.go:172] (0xc001d4a790) (0xc000ded860) Create stream I0321 12:23:57.832140 6 log.go:172] (0xc001d4a790) (0xc000ded860) Stream added, broadcasting: 5 I0321 12:23:57.833056 6 log.go:172] (0xc001d4a790) Reply frame received for 5 I0321 12:23:57.901429 6 log.go:172] (0xc001d4a790) Data frame received for 5 I0321 12:23:57.901467 6 log.go:172] (0xc000ded860) (5) Data frame handling I0321 12:23:57.901533 6 log.go:172] (0xc001d4a790) Data frame received for 3 I0321 12:23:57.901582 6 log.go:172] (0xc000ded720) (3) Data frame handling I0321 12:23:57.901613 6 log.go:172] (0xc000ded720) (3) Data frame sent I0321 12:23:57.901634 6 log.go:172] (0xc001d4a790) Data frame received for 3 I0321 12:23:57.901653 6 log.go:172] (0xc000ded720) (3) Data frame handling I0321 12:23:57.903546 6 log.go:172] (0xc001d4a790) Data frame received for 1 I0321 12:23:57.903591 6 log.go:172] (0xc00142ff40) (1) Data frame handling I0321 12:23:57.903621 6 log.go:172] (0xc00142ff40) (1) Data frame sent I0321 12:23:57.903644 6 log.go:172] (0xc001d4a790) (0xc00142ff40) Stream removed, broadcasting: 1 I0321 12:23:57.903671 6 log.go:172] (0xc001d4a790) Go away received I0321 12:23:57.903838 6 log.go:172] (0xc001d4a790) (0xc00142ff40) Stream removed, broadcasting: 1 I0321 12:23:57.903917 6 log.go:172] (0xc001d4a790) (0xc000ded720) Stream removed, broadcasting: 3 I0321 12:23:57.903947 6 log.go:172] (0xc001d4a790) (0xc000ded860) Stream removed, broadcasting: 5 Mar 21 12:23:57.903: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 21 12:23:57.904: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lvxpp PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false}
Mar 21 12:23:57.904: INFO: >>> kubeConfig: /root/.kube/config
I0321 12:23:57.954832 6 log.go:172] (0xc001d4ac60) (0xc002262280) Create stream
I0321 12:23:57.954864 6 log.go:172] (0xc001d4ac60) (0xc002262280) Stream added, broadcasting: 1
I0321 12:23:57.956708 6 log.go:172] (0xc001d4ac60) Reply frame received for 1
I0321 12:23:57.956760 6 log.go:172] (0xc001d4ac60) (0xc002262320) Create stream
I0321 12:23:57.956771 6 log.go:172] (0xc001d4ac60) (0xc002262320) Stream added, broadcasting: 3
I0321 12:23:57.957646 6 log.go:172] (0xc001d4ac60) Reply frame received for 3
I0321 12:23:57.957675 6 log.go:172] (0xc001d4ac60) (0xc002579cc0) Create stream
I0321 12:23:57.957683 6 log.go:172] (0xc001d4ac60) (0xc002579cc0) Stream added, broadcasting: 5
I0321 12:23:57.958293 6 log.go:172] (0xc001d4ac60) Reply frame received for 5
I0321 12:23:58.023982 6 log.go:172] (0xc001d4ac60) Data frame received for 5
I0321 12:23:58.024042 6 log.go:172] (0xc002579cc0) (5) Data frame handling
I0321 12:23:58.024090 6 log.go:172] (0xc001d4ac60) Data frame received for 3
I0321 12:23:58.024116 6 log.go:172] (0xc002262320) (3) Data frame handling
I0321 12:23:58.024156 6 log.go:172] (0xc002262320) (3) Data frame sent
I0321 12:23:58.024182 6 log.go:172] (0xc001d4ac60) Data frame received for 3
I0321 12:23:58.024202 6 log.go:172] (0xc002262320) (3) Data frame handling
I0321 12:23:58.026356 6 log.go:172] (0xc001d4ac60) Data frame received for 1
I0321 12:23:58.026412 6 log.go:172] (0xc002262280) (1) Data frame handling
I0321 12:23:58.026449 6 log.go:172] (0xc002262280) (1) Data frame sent
I0321 12:23:58.026468 6 log.go:172] (0xc001d4ac60) (0xc002262280) Stream removed, broadcasting: 1
I0321 12:23:58.026487 6 log.go:172] (0xc001d4ac60) Go away received
I0321 12:23:58.026619 6 log.go:172] (0xc001d4ac60) (0xc002262280) Stream removed, broadcasting: 1
I0321 12:23:58.026648 6 log.go:172] (0xc001d4ac60) (0xc002262320) Stream removed, broadcasting: 3
I0321 12:23:58.026670 6 log.go:172] (0xc001d4ac60) (0xc002579cc0) Stream removed, broadcasting: 5
Mar 21 12:23:58.026: INFO: Exec stderr: ""
Mar 21 12:23:58.026: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lvxpp PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 21 12:23:58.026: INFO: >>> kubeConfig: /root/.kube/config
I0321 12:23:58.080503 6 log.go:172] (0xc0009f1ce0) (0xc000dbc000) Create stream
I0321 12:23:58.080540 6 log.go:172] (0xc0009f1ce0) (0xc000dbc000) Stream added, broadcasting: 1
I0321 12:23:58.091210 6 log.go:172] (0xc0009f1ce0) Reply frame received for 1
I0321 12:23:58.091260 6 log.go:172] (0xc0009f1ce0) (0xc001f42000) Create stream
I0321 12:23:58.091273 6 log.go:172] (0xc0009f1ce0) (0xc001f42000) Stream added, broadcasting: 3
I0321 12:23:58.092173 6 log.go:172] (0xc0009f1ce0) Reply frame received for 3
I0321 12:23:58.092221 6 log.go:172] (0xc0009f1ce0) (0xc002578000) Create stream
I0321 12:23:58.092234 6 log.go:172] (0xc0009f1ce0) (0xc002578000) Stream added, broadcasting: 5
I0321 12:23:58.093222 6 log.go:172] (0xc0009f1ce0) Reply frame received for 5
I0321 12:23:58.155528 6 log.go:172] (0xc0009f1ce0) Data frame received for 3
I0321 12:23:58.155562 6 log.go:172] (0xc001f42000) (3) Data frame handling
I0321 12:23:58.155577 6 log.go:172] (0xc001f42000) (3) Data frame sent
I0321 12:23:58.155587 6 log.go:172] (0xc0009f1ce0) Data frame received for 3
I0321 12:23:58.155613 6 log.go:172] (0xc001f42000) (3) Data frame handling
I0321 12:23:58.156008 6 log.go:172] (0xc0009f1ce0) Data frame received for 5
I0321 12:23:58.156023 6 log.go:172] (0xc002578000) (5) Data frame handling
I0321 12:23:58.160617 6 log.go:172] (0xc0009f1ce0) Data frame received for 1
I0321 12:23:58.160641 6 log.go:172] (0xc000dbc000) (1) Data frame handling
I0321 12:23:58.160652 6 log.go:172] (0xc000dbc000) (1) Data frame sent
I0321 12:23:58.160667 6 log.go:172] (0xc0009f1ce0) (0xc000dbc000) Stream removed, broadcasting: 1
I0321 12:23:58.160760 6 log.go:172] (0xc0009f1ce0) (0xc000dbc000) Stream removed, broadcasting: 1
I0321 12:23:58.160773 6 log.go:172] (0xc0009f1ce0) (0xc001f42000) Stream removed, broadcasting: 3
I0321 12:23:58.160782 6 log.go:172] (0xc0009f1ce0) (0xc002578000) Stream removed, broadcasting: 5
Mar 21 12:23:58.160: INFO: Exec stderr: ""
Mar 21 12:23:58.160: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lvxpp PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 21 12:23:58.160: INFO: >>> kubeConfig: /root/.kube/config
I0321 12:23:58.162466 6 log.go:172] (0xc0009f1ce0) Go away received
I0321 12:23:58.202787 6 log.go:172] (0xc0015ac160) (0xc002578280) Create stream
I0321 12:23:58.202813 6 log.go:172] (0xc0015ac160) (0xc002578280) Stream added, broadcasting: 1
I0321 12:23:58.204669 6 log.go:172] (0xc0015ac160) Reply frame received for 1
I0321 12:23:58.204700 6 log.go:172] (0xc0015ac160) (0xc002578320) Create stream
I0321 12:23:58.204711 6 log.go:172] (0xc0015ac160) (0xc002578320) Stream added, broadcasting: 3
I0321 12:23:58.205780 6 log.go:172] (0xc0015ac160) Reply frame received for 3
I0321 12:23:58.205819 6 log.go:172] (0xc0015ac160) (0xc0023b80a0) Create stream
I0321 12:23:58.205833 6 log.go:172] (0xc0015ac160) (0xc0023b80a0) Stream added, broadcasting: 5
I0321 12:23:58.206799 6 log.go:172] (0xc0015ac160) Reply frame received for 5
I0321 12:23:58.257559 6 log.go:172] (0xc0015ac160) Data frame received for 5
I0321 12:23:58.257587 6 log.go:172] (0xc0023b80a0) (5) Data frame handling
I0321 12:23:58.257605 6 log.go:172] (0xc0015ac160) Data frame received for 3
I0321 12:23:58.257612 6 log.go:172] (0xc002578320) (3) Data frame handling
I0321 12:23:58.257621 6 log.go:172] (0xc002578320) (3) Data frame sent
I0321 12:23:58.257626 6 log.go:172] (0xc0015ac160) Data frame received for 3
I0321 12:23:58.257640 6 log.go:172] (0xc002578320) (3) Data frame handling
I0321 12:23:58.259063 6 log.go:172] (0xc0015ac160) Data frame received for 1
I0321 12:23:58.259109 6 log.go:172] (0xc002578280) (1) Data frame handling
I0321 12:23:58.259142 6 log.go:172] (0xc002578280) (1) Data frame sent
I0321 12:23:58.259158 6 log.go:172] (0xc0015ac160) (0xc002578280) Stream removed, broadcasting: 1
I0321 12:23:58.259177 6 log.go:172] (0xc0015ac160) Go away received
I0321 12:23:58.259295 6 log.go:172] (0xc0015ac160) (0xc002578280) Stream removed, broadcasting: 1
I0321 12:23:58.259313 6 log.go:172] (0xc0015ac160) (0xc002578320) Stream removed, broadcasting: 3
I0321 12:23:58.259319 6 log.go:172] (0xc0015ac160) (0xc0023b80a0) Stream removed, broadcasting: 5
Mar 21 12:23:58.259: INFO: Exec stderr: ""
Mar 21 12:23:58.259: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-lvxpp PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 21 12:23:58.259: INFO: >>> kubeConfig: /root/.kube/config
I0321 12:23:58.293342 6 log.go:172] (0xc00033def0) (0xc002396320) Create stream
I0321 12:23:58.293413 6 log.go:172] (0xc00033def0) (0xc002396320) Stream added, broadcasting: 1
I0321 12:23:58.295455 6 log.go:172] (0xc00033def0) Reply frame received for 1
I0321 12:23:58.295487 6 log.go:172] (0xc00033def0) (0xc0023b8140) Create stream
I0321 12:23:58.295499 6 log.go:172] (0xc00033def0) (0xc0023b8140) Stream added, broadcasting: 3
I0321 12:23:58.296228 6 log.go:172] (0xc00033def0) Reply frame received for 3
I0321 12:23:58.296272 6 log.go:172] (0xc00033def0) (0xc0023963c0) Create stream
I0321 12:23:58.296289 6 log.go:172] (0xc00033def0) (0xc0023963c0) Stream added, broadcasting: 5
I0321 12:23:58.297237 6 log.go:172] (0xc00033def0) Reply frame received for 5
I0321 12:23:58.364355 6 log.go:172] (0xc00033def0) Data frame received for 5
I0321 12:23:58.364431 6 log.go:172] (0xc0023963c0) (5) Data frame handling
I0321 12:23:58.364549 6 log.go:172] (0xc00033def0) Data frame received for 3
I0321 12:23:58.364579 6 log.go:172] (0xc0023b8140) (3) Data frame handling
I0321 12:23:58.364601 6 log.go:172] (0xc0023b8140) (3) Data frame sent
I0321 12:23:58.364615 6 log.go:172] (0xc00033def0) Data frame received for 3
I0321 12:23:58.364628 6 log.go:172] (0xc0023b8140) (3) Data frame handling
I0321 12:23:58.365837 6 log.go:172] (0xc00033def0) Data frame received for 1
I0321 12:23:58.365881 6 log.go:172] (0xc002396320) (1) Data frame handling
I0321 12:23:58.365927 6 log.go:172] (0xc002396320) (1) Data frame sent
I0321 12:23:58.365962 6 log.go:172] (0xc00033def0) (0xc002396320) Stream removed, broadcasting: 1
I0321 12:23:58.365986 6 log.go:172] (0xc00033def0) Go away received
I0321 12:23:58.366098 6 log.go:172] (0xc00033def0) (0xc002396320) Stream removed, broadcasting: 1
I0321 12:23:58.366119 6 log.go:172] (0xc00033def0) (0xc0023b8140) Stream removed, broadcasting: 3
I0321 12:23:58.366147 6 log.go:172] (0xc00033def0) (0xc0023963c0) Stream removed, broadcasting: 5
Mar 21 12:23:58.366: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:23:58.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-lvxpp" for this suite.
Mar 21 12:24:36.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:24:36.519: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-lvxpp, resource: bindings, ignored listing per whitelist
Mar 21 12:24:36.554: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-lvxpp deletion completed in 38.184230637s
• [SLOW TEST:49.442 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:24:36.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-edabb0de-6b6e-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 21 12:24:36.661: INFO: Waiting up to 5m0s for pod "pod-secrets-edaeb47e-6b6e-11ea-946c-0242ac11000f" in namespace "e2e-tests-secrets-rdwzp" to be "success or failure"
Mar 21 12:24:36.665: INFO: Pod "pod-secrets-edaeb47e-6b6e-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.796124ms
Mar 21 12:24:38.669: INFO: Pod "pod-secrets-edaeb47e-6b6e-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008034911s
Mar 21 12:24:40.673: INFO: Pod "pod-secrets-edaeb47e-6b6e-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012353169s
STEP: Saw pod success
Mar 21 12:24:40.673: INFO: Pod "pod-secrets-edaeb47e-6b6e-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:24:40.676: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-edaeb47e-6b6e-11ea-946c-0242ac11000f container secret-volume-test:
STEP: delete the pod
Mar 21 12:24:40.711: INFO: Waiting for pod pod-secrets-edaeb47e-6b6e-11ea-946c-0242ac11000f to disappear
Mar 21 12:24:40.725: INFO: Pod pod-secrets-edaeb47e-6b6e-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:24:40.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rdwzp" for this suite.
Mar 21 12:24:46.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:24:46.762: INFO: namespace: e2e-tests-secrets-rdwzp, resource: bindings, ignored listing per whitelist
Mar 21 12:24:46.817: INFO: namespace e2e-tests-secrets-rdwzp deletion completed in 6.087623823s
• [SLOW TEST:10.262 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:24:46.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Mar 21 12:24:46.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Mar 21 12:24:47.087: INFO: stderr: ""
Mar 21 12:24:47.087: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:24:47.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8zgjx" for this suite.
Mar 21 12:24:53.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:24:53.170: INFO: namespace: e2e-tests-kubectl-8zgjx, resource: bindings, ignored listing per whitelist
Mar 21 12:24:53.186: INFO: namespace e2e-tests-kubectl-8zgjx deletion completed in 6.09437713s
• [SLOW TEST:6.369 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:24:53.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 21 12:24:57.885: INFO: Successfully updated pod "annotationupdatef79f687c-6b6e-11ea-946c-0242ac11000f"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:24:59.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jtt96" for this suite.
Mar 21 12:25:21.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:25:21.964: INFO: namespace: e2e-tests-projected-jtt96, resource: bindings, ignored listing per whitelist
Mar 21 12:25:21.998: INFO: namespace e2e-tests-projected-jtt96 deletion completed in 22.094375669s
• [SLOW TEST:28.811 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:25:21.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 21 12:25:22.099: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08c4dcf9-6b6f-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-xfkcn" to be "success or failure"
Mar 21 12:25:22.127: INFO: Pod "downwardapi-volume-08c4dcf9-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.542187ms
Mar 21 12:25:24.131: INFO: Pod "downwardapi-volume-08c4dcf9-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031989461s
Mar 21 12:25:26.135: INFO: Pod "downwardapi-volume-08c4dcf9-6b6f-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035748694s
STEP: Saw pod success
Mar 21 12:25:26.135: INFO: Pod "downwardapi-volume-08c4dcf9-6b6f-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:25:26.138: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-08c4dcf9-6b6f-11ea-946c-0242ac11000f container client-container:
STEP: delete the pod
Mar 21 12:25:26.159: INFO: Waiting for pod downwardapi-volume-08c4dcf9-6b6f-11ea-946c-0242ac11000f to disappear
Mar 21 12:25:26.163: INFO: Pod downwardapi-volume-08c4dcf9-6b6f-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:25:26.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xfkcn" for this suite.
Mar 21 12:25:32.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:25:32.191: INFO: namespace: e2e-tests-downward-api-xfkcn, resource: bindings, ignored listing per whitelist
Mar 21 12:25:32.284: INFO: namespace e2e-tests-downward-api-xfkcn deletion completed in 6.119061199s
• [SLOW TEST:10.287 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:25:32.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Mar 21 12:25:32.378: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 21 12:25:32.403: INFO: Waiting for terminating namespaces to be deleted...
Mar 21 12:25:32.406: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Mar 21 12:25:32.413: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Mar 21 12:25:32.413: INFO: Container coredns ready: true, restart count 0
Mar 21 12:25:32.414: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
Mar 21 12:25:32.414: INFO: Container kube-proxy ready: true, restart count 0
Mar 21 12:25:32.414: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 21 12:25:32.414: INFO: Container kindnet-cni ready: true, restart count 0
Mar 21 12:25:32.414: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Mar 21 12:25:32.419: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 21 12:25:32.419: INFO: Container kindnet-cni ready: true, restart count 0
Mar 21 12:25:32.419: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Mar 21 12:25:32.419: INFO: Container coredns ready: true, restart count 0
Mar 21 12:25:32.419: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 21 12:25:32.419: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fe515236e3fa8f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:25:33.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-mlffh" for this suite.
Mar 21 12:25:39.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:25:39.537: INFO: namespace: e2e-tests-sched-pred-mlffh, resource: bindings, ignored listing per whitelist
Mar 21 12:25:39.570: INFO: namespace e2e-tests-sched-pred-mlffh deletion completed in 6.131014302s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:7.286 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:25:39.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Mar 21 12:25:39.724: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-flhxc,SelfLink:/api/v1/namespaces/e2e-tests-watch-flhxc/configmaps/e2e-watch-test-resource-version,UID:133d97a4-6b6f-11ea-99e8-0242ac110002,ResourceVersion:1026503,Generation:0,CreationTimestamp:2020-03-21 12:25:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 21 12:25:39.724: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-flhxc,SelfLink:/api/v1/namespaces/e2e-tests-watch-flhxc/configmaps/e2e-watch-test-resource-version,UID:133d97a4-6b6f-11ea-99e8-0242ac110002,ResourceVersion:1026504,Generation:0,CreationTimestamp:2020-03-21 12:25:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:25:39.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-flhxc" for this suite.
Mar 21 12:25:45.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:25:45.809: INFO: namespace: e2e-tests-watch-flhxc, resource: bindings, ignored listing per whitelist
Mar 21 12:25:45.817: INFO: namespace e2e-tests-watch-flhxc deletion completed in 6.086383719s
• [SLOW TEST:6.246 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:25:45.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 21 12:25:45.965: INFO: Waiting up to 5m0s for pod "pod-16ff4f43-6b6f-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-js5db" to be "success or failure"
Mar 21 12:25:45.990: INFO: Pod "pod-16ff4f43-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.882827ms
Mar 21 12:25:47.994: INFO: Pod "pod-16ff4f43-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028691598s
Mar 21 12:25:49.998: INFO: Pod "pod-16ff4f43-6b6f-11ea-946c-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 4.032768739s
Mar 21 12:25:52.001: INFO: Pod "pod-16ff4f43-6b6f-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036082652s
STEP: Saw pod success
Mar 21 12:25:52.001: INFO: Pod "pod-16ff4f43-6b6f-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:25:52.003: INFO: Trying to get logs from node hunter-worker pod pod-16ff4f43-6b6f-11ea-946c-0242ac11000f container test-container:
STEP: delete the pod
Mar 21 12:25:52.047: INFO: Waiting for pod pod-16ff4f43-6b6f-11ea-946c-0242ac11000f to disappear
Mar 21 12:25:52.068: INFO: Pod pod-16ff4f43-6b6f-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:25:52.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-js5db" for this suite.
Mar 21 12:25:58.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:25:58.125: INFO: namespace: e2e-tests-emptydir-js5db, resource: bindings, ignored listing per whitelist
Mar 21 12:25:58.166: INFO: namespace e2e-tests-emptydir-js5db deletion completed in 6.094848939s
• [SLOW TEST:12.349 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:25:58.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 21 12:25:58.297: INFO: Waiting up to 5m0s for pod "pod-1e590326-6b6f-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-spn7w" to be "success or failure"
Mar 21 12:25:58.302: INFO: Pod "pod-1e590326-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364522ms
Mar 21 12:26:00.306: INFO: Pod "pod-1e590326-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008489253s
Mar 21 12:26:02.310: INFO: Pod "pod-1e590326-6b6f-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012949444s
STEP: Saw pod success
Mar 21 12:26:02.310: INFO: Pod "pod-1e590326-6b6f-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:26:02.315: INFO: Trying to get logs from node hunter-worker pod pod-1e590326-6b6f-11ea-946c-0242ac11000f container test-container:
STEP: delete the pod
Mar 21 12:26:02.330: INFO: Waiting for pod pod-1e590326-6b6f-11ea-946c-0242ac11000f to disappear
Mar 21 12:26:02.334: INFO: Pod pod-1e590326-6b6f-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:26:02.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-spn7w" for this suite.
Mar 21 12:26:08.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:26:08.380: INFO: namespace: e2e-tests-emptydir-spn7w, resource: bindings, ignored listing per whitelist
Mar 21 12:26:08.418: INFO: namespace e2e-tests-emptydir-spn7w deletion completed in 6.080897558s
• [SLOW TEST:10.251 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:26:08.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-247b369e-6b6f-11ea-946c-0242ac11000f
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:26:12.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hjb6b" for this suite.
Mar 21 12:26:34.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:26:34.706: INFO: namespace: e2e-tests-configmap-hjb6b, resource: bindings, ignored listing per whitelist
Mar 21 12:26:34.745: INFO: namespace e2e-tests-configmap-hjb6b deletion completed in 22.109811794s
• [SLOW TEST:26.327 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:26:34.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-3420d2f8-6b6f-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 21 12:26:34.843: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-34214428-6b6f-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-kllwt" to be "success or failure"
Mar 21 12:26:34.847: INFO: Pod "pod-projected-secrets-34214428-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.733132ms
Mar 21 12:26:36.888: INFO: Pod "pod-projected-secrets-34214428-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044857476s
Mar 21 12:26:38.892: INFO: Pod "pod-projected-secrets-34214428-6b6f-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048947721s
STEP: Saw pod success
Mar 21 12:26:38.892: INFO: Pod "pod-projected-secrets-34214428-6b6f-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:26:38.895: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-34214428-6b6f-11ea-946c-0242ac11000f container projected-secret-volume-test:
STEP: delete the pod
Mar 21 12:26:38.962: INFO: Waiting for pod pod-projected-secrets-34214428-6b6f-11ea-946c-0242ac11000f to disappear
Mar 21 12:26:38.970: INFO: Pod pod-projected-secrets-34214428-6b6f-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:26:38.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kllwt" for this suite.
Mar 21 12:26:44.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:26:45.000: INFO: namespace: e2e-tests-projected-kllwt, resource: bindings, ignored listing per whitelist
Mar 21 12:26:45.067: INFO: namespace e2e-tests-projected-kllwt deletion completed in 6.094107858s
• [SLOW TEST:10.322 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating
a kubernetes client Mar 21 12:26:45.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 21 12:26:45.164: INFO: Waiting up to 5m0s for pod "downward-api-3a4753d0-6b6f-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-7znct" to be "success or failure" Mar 21 12:26:45.168: INFO: Pod "downward-api-3a4753d0-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.54201ms Mar 21 12:26:47.201: INFO: Pod "downward-api-3a4753d0-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03678093s Mar 21 12:26:49.242: INFO: Pod "downward-api-3a4753d0-6b6f-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078179427s STEP: Saw pod success Mar 21 12:26:49.243: INFO: Pod "downward-api-3a4753d0-6b6f-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:26:49.246: INFO: Trying to get logs from node hunter-worker pod downward-api-3a4753d0-6b6f-11ea-946c-0242ac11000f container dapi-container: STEP: delete the pod Mar 21 12:26:49.262: INFO: Waiting for pod downward-api-3a4753d0-6b6f-11ea-946c-0242ac11000f to disappear Mar 21 12:26:49.266: INFO: Pod downward-api-3a4753d0-6b6f-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:26:49.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7znct" for this suite. 
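[Editor's note] The downward-api test above verifies that a pod's name, namespace, and IP can be injected as environment variables. A minimal manifest of the kind this test creates might look like the following (all names here are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    # Print the injected variables so the test framework can check the log.
    command: ["sh", "-c", "env | grep '^POD_'"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```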
Mar 21 12:26:55.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:26:55.330: INFO: namespace: e2e-tests-downward-api-7znct, resource: bindings, ignored listing per whitelist Mar 21 12:26:55.361: INFO: namespace e2e-tests-downward-api-7znct deletion completed in 6.091869559s • [SLOW TEST:10.293 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:26:55.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-406e5d57-6b6f-11ea-946c-0242ac11000f STEP: Creating configMap with name cm-test-opt-upd-406e5db7-6b6f-11ea-946c-0242ac11000f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-406e5d57-6b6f-11ea-946c-0242ac11000f STEP: Updating configmap cm-test-opt-upd-406e5db7-6b6f-11ea-946c-0242ac11000f STEP: Creating configMap with name cm-test-opt-create-406e5de0-6b6f-11ea-946c-0242ac11000f STEP: waiting to observe update in 
volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:28:05.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xfrj4" for this suite. Mar 21 12:28:27.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:28:27.988: INFO: namespace: e2e-tests-projected-xfrj4, resource: bindings, ignored listing per whitelist Mar 21 12:28:28.008: INFO: namespace e2e-tests-projected-xfrj4 deletion completed in 22.103694813s • [SLOW TEST:92.647 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:28:28.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 21 12:28:28.131: INFO: Waiting up to 5m0s for 
pod "downwardapi-volume-77a4fc1e-6b6f-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-bs88g" to be "success or failure" Mar 21 12:28:28.138: INFO: Pod "downwardapi-volume-77a4fc1e-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.555142ms Mar 21 12:28:30.144: INFO: Pod "downwardapi-volume-77a4fc1e-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012623121s Mar 21 12:28:32.148: INFO: Pod "downwardapi-volume-77a4fc1e-6b6f-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017279554s STEP: Saw pod success Mar 21 12:28:32.148: INFO: Pod "downwardapi-volume-77a4fc1e-6b6f-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:28:32.152: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-77a4fc1e-6b6f-11ea-946c-0242ac11000f container client-container: STEP: delete the pod Mar 21 12:28:32.197: INFO: Waiting for pod downwardapi-volume-77a4fc1e-6b6f-11ea-946c-0242ac11000f to disappear Mar 21 12:28:32.210: INFO: Pod downwardapi-volume-77a4fc1e-6b6f-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:28:32.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bs88g" for this suite. 
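[Editor's note] The "should provide podname only" test above uses the downward API as a volume rather than as env vars: the pod name is written to a file inside the container. A sketch of such a pod (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # The pod's own name is readable as a plain file.
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```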
Mar 21 12:28:38.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:28:38.326: INFO: namespace: e2e-tests-downward-api-bs88g, resource: bindings, ignored listing per whitelist Mar 21 12:28:38.350: INFO: namespace e2e-tests-downward-api-bs88g deletion completed in 6.136981999s • [SLOW TEST:10.341 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:28:38.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 21 12:28:38.477: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7dd2914e-6b6f-11ea-946c-0242ac11000f" in namespace "e2e-tests-downward-api-fkc54" to be "success or failure" Mar 21 12:28:38.502: INFO: Pod "downwardapi-volume-7dd2914e-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", 
readiness=false. Elapsed: 24.637462ms Mar 21 12:28:40.543: INFO: Pod "downwardapi-volume-7dd2914e-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066166692s Mar 21 12:28:42.548: INFO: Pod "downwardapi-volume-7dd2914e-6b6f-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070486987s STEP: Saw pod success Mar 21 12:28:42.548: INFO: Pod "downwardapi-volume-7dd2914e-6b6f-11ea-946c-0242ac11000f" satisfied condition "success or failure" Mar 21 12:28:42.551: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7dd2914e-6b6f-11ea-946c-0242ac11000f container client-container: STEP: delete the pod Mar 21 12:28:42.587: INFO: Waiting for pod downwardapi-volume-7dd2914e-6b6f-11ea-946c-0242ac11000f to disappear Mar 21 12:28:42.601: INFO: Pod downwardapi-volume-7dd2914e-6b6f-11ea-946c-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:28:42.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fkc54" for this suite. 
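[Editor's note] The "should provide container's cpu request" test above relies on `resourceFieldRef`, the downward-API variant for resource requests/limits rather than metadata fields. A hedged sketch (the container name, file path, and request value are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m   # expose the request in millicores, so the file reads "250"
```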
Mar 21 12:28:48.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:28:48.664: INFO: namespace: e2e-tests-downward-api-fkc54, resource: bindings, ignored listing per whitelist Mar 21 12:28:48.741: INFO: namespace e2e-tests-downward-api-fkc54 deletion completed in 6.13599145s • [SLOW TEST:10.391 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:28:48.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 21 12:28:50.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine 
--labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mw56n' Mar 21 12:28:52.092: INFO: stderr: "" Mar 21 12:28:52.092: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 21 12:28:57.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mw56n -o json' Mar 21 12:28:57.248: INFO: stderr: "" Mar 21 12:28:57.248: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-21T12:28:52Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-mw56n\",\n \"resourceVersion\": \"1027115\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-mw56n/pods/e2e-test-nginx-pod\",\n \"uid\": \"85eeacef-6b6f-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-nxsrd\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n 
\"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-nxsrd\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-nxsrd\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-21T12:28:52Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-21T12:28:55Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-21T12:28:55Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-21T12:28:52Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://e3b72be5150542f766d5a96047efa7625984a3f7392f433701efc297e054eac0\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-21T12:28:54Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.185\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-21T12:28:52Z\"\n }\n}\n" STEP: replace the image in the pod Mar 21 12:28:57.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-mw56n' Mar 21 12:28:57.515: INFO: stderr: "" Mar 21 12:28:57.515: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Mar 
21 12:28:57.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mw56n' Mar 21 12:29:11.694: INFO: stderr: "" Mar 21 12:29:11.694: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:29:11.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mw56n" for this suite. Mar 21 12:29:17.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:29:17.831: INFO: namespace: e2e-tests-kubectl-mw56n, resource: bindings, ignored listing per whitelist Mar 21 12:29:17.836: INFO: namespace e2e-tests-kubectl-mw56n deletion completed in 6.117144359s • [SLOW TEST:29.094 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:29:17.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-7gn8 STEP: Creating a pod to test atomic-volume-subpath Mar 21 12:29:17.975: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7gn8" in namespace "e2e-tests-subpath-6tk84" to be "success or failure" Mar 21 12:29:17.994: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.959397ms Mar 21 12:29:19.999: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023482343s Mar 21 12:29:22.003: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027709365s Mar 21 12:29:24.008: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Running", Reason="", readiness=false. Elapsed: 6.032421343s Mar 21 12:29:26.012: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Running", Reason="", readiness=false. Elapsed: 8.036764502s Mar 21 12:29:28.017: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Running", Reason="", readiness=false. Elapsed: 10.041391179s Mar 21 12:29:30.022: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Running", Reason="", readiness=false. Elapsed: 12.046095312s Mar 21 12:29:32.026: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Running", Reason="", readiness=false. Elapsed: 14.050877674s Mar 21 12:29:34.031: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Running", Reason="", readiness=false. Elapsed: 16.055348657s Mar 21 12:29:36.035: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Running", Reason="", readiness=false. Elapsed: 18.059888178s Mar 21 12:29:38.040: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.064560823s Mar 21 12:29:40.044: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Running", Reason="", readiness=false. Elapsed: 22.069007883s Mar 21 12:29:42.049: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Running", Reason="", readiness=false. Elapsed: 24.073634991s Mar 21 12:29:44.054: INFO: Pod "pod-subpath-test-configmap-7gn8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.078143491s STEP: Saw pod success Mar 21 12:29:44.054: INFO: Pod "pod-subpath-test-configmap-7gn8" satisfied condition "success or failure" Mar 21 12:29:44.057: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-7gn8 container test-container-subpath-configmap-7gn8: STEP: delete the pod Mar 21 12:29:44.097: INFO: Waiting for pod pod-subpath-test-configmap-7gn8 to disappear Mar 21 12:29:44.103: INFO: Pod pod-subpath-test-configmap-7gn8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-7gn8 Mar 21 12:29:44.103: INFO: Deleting pod "pod-subpath-test-configmap-7gn8" in namespace "e2e-tests-subpath-6tk84" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:29:44.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-6tk84" for this suite. 
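[Editor's note] The Subpath test above mounts a single ConfigMap key via `subPath` instead of mounting the whole volume. A minimal sketch of the pattern (ConfigMap and pod names are illustrative); note that, because the kubelet's atomic writer updates keys via symlink swaps, a `subPath` mount does not pick up later ConfigMap updates:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config/data-1
      subPath: data-1   # mount only this key, not the whole ConfigMap
  volumes:
  - name: config-vol
    configMap:
      name: demo-config
```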
Mar 21 12:29:50.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:29:50.155: INFO: namespace: e2e-tests-subpath-6tk84, resource: bindings, ignored listing per whitelist Mar 21 12:29:50.221: INFO: namespace e2e-tests-subpath-6tk84 deletion completed in 6.105250826s • [SLOW TEST:32.385 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:29:50.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Mar 21 12:29:50.942: INFO: created pod pod-service-account-defaultsa Mar 21 12:29:50.942: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 21 12:29:50.950: INFO: created pod pod-service-account-mountsa Mar 21 12:29:50.950: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 21 12:29:50.977: INFO: created pod pod-service-account-nomountsa Mar 21 
12:29:50.978: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 21 12:29:50.992: INFO: created pod pod-service-account-defaultsa-mountspec Mar 21 12:29:50.992: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 21 12:29:51.059: INFO: created pod pod-service-account-mountsa-mountspec Mar 21 12:29:51.059: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 21 12:29:51.074: INFO: created pod pod-service-account-nomountsa-mountspec Mar 21 12:29:51.074: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 21 12:29:51.094: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 21 12:29:51.094: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 21 12:29:51.157: INFO: created pod pod-service-account-mountsa-nomountspec Mar 21 12:29:51.157: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 21 12:29:51.233: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 21 12:29:51.233: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:29:51.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-6wvsv" for this suite. 
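[Editor's note] The ServiceAccounts test above checks every combination of `automountServiceAccountToken` on the ServiceAccount and on the pod spec; when both are set, the pod-level field wins. A sketch of opting out at both levels (object names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-mount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-no-token
spec:
  serviceAccountName: no-mount-sa
  automountServiceAccountToken: false   # pod-level setting overrides the SA's
  containers:
  - name: main
    image: busybox:1.29
    # With automount disabled, the usual token directory is absent.
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || echo no-token"]
```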
Mar 21 12:30:19.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:30:19.463: INFO: namespace: e2e-tests-svcaccounts-6wvsv, resource: bindings, ignored listing per whitelist Mar 21 12:30:19.484: INFO: namespace e2e-tests-svcaccounts-6wvsv deletion completed in 28.112547388s • [SLOW TEST:29.263 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 21 12:30:19.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 21 12:30:19.597: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 21 12:30:24.602: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 21 12:30:24.602: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 21 12:30:26.606: INFO: Creating deployment "test-rollover-deployment" Mar 21 12:30:26.614: INFO: Make sure deployment "test-rollover-deployment" performs 
scaling operations Mar 21 12:30:28.621: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 21 12:30:28.628: INFO: Ensure that both replica sets have 1 created replica Mar 21 12:30:28.634: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 21 12:30:28.640: INFO: Updating deployment test-rollover-deployment Mar 21 12:30:28.640: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 21 12:30:30.659: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 21 12:30:30.665: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 21 12:30:30.671: INFO: all replica sets need to contain the pod-template-hash label Mar 21 12:30:30.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390628, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 12:30:32.679: INFO: all replica sets need to contain the pod-template-hash label Mar 21 12:30:32.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390632, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 12:30:34.678: INFO: all replica sets need to contain the pod-template-hash label Mar 21 12:30:34.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390632, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 12:30:36.679: INFO: all replica sets need to contain the pod-template-hash label Mar 21 12:30:36.679: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390632, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 12:30:38.678: INFO: all replica sets need to contain the pod-template-hash label Mar 21 12:30:38.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390632, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 12:30:40.679: INFO: all 
replica sets need to contain the pod-template-hash label Mar 21 12:30:40.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390632, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390626, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 12:30:42.679: INFO: Mar 21 12:30:42.679: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 21 12:30:42.686: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-nm7fb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nm7fb/deployments/test-rollover-deployment,UID:be46ecea-6b6f-11ea-99e8-0242ac110002,ResourceVersion:1027552,Generation:2,CreationTimestamp:2020-03-21 12:30:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-21 12:30:26 +0000 UTC 2020-03-21 12:30:26 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-21 12:30:42 +0000 UTC 2020-03-21 12:30:26 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 21 12:30:42.689: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-nm7fb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nm7fb/replicasets/test-rollover-deployment-5b8479fdb6,UID:bf7d5ff9-6b6f-11ea-99e8-0242ac110002,ResourceVersion:1027543,Generation:2,CreationTimestamp:2020-03-21 12:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment be46ecea-6b6f-11ea-99e8-0242ac110002 0xc0021b1737 0xc0021b1738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 21 12:30:42.689: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 21 12:30:42.689: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-nm7fb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nm7fb/replicasets/test-rollover-controller,UID:ba160d51-6b6f-11ea-99e8-0242ac110002,ResourceVersion:1027551,Generation:2,CreationTimestamp:2020-03-21 12:30:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment be46ecea-6b6f-11ea-99e8-0242ac110002 0xc0021b1597 0xc0021b1598}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 21 12:30:42.690: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-nm7fb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nm7fb/replicasets/test-rollover-deployment-58494b7559,UID:be49588d-6b6f-11ea-99e8-0242ac110002,ResourceVersion:1027510,Generation:2,CreationTimestamp:2020-03-21 12:30:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment be46ecea-6b6f-11ea-99e8-0242ac110002 0xc0021b1667 0xc0021b1668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 21 12:30:42.693: INFO: Pod "test-rollover-deployment-5b8479fdb6-bdpth" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-bdpth,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-nm7fb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nm7fb/pods/test-rollover-deployment-5b8479fdb6-bdpth,UID:bf87c486-6b6f-11ea-99e8-0242ac110002,ResourceVersion:1027521,Generation:0,CreationTimestamp:2020-03-21 12:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 bf7d5ff9-6b6f-11ea-99e8-0242ac110002 0xc00226a207 0xc00226a208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ttvxq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ttvxq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-ttvxq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226a280} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226a2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:30:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:30:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-21 12:30:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.60,StartTime:2020-03-21 12:30:28 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-21 12:30:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://18e31a47d70bea0f6a64653569b0c28d7a4eb74fd57612c8a286af5a619254d6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:30:42.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nm7fb" for this suite.
Mar 21 12:30:48.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:30:48.755: INFO: namespace: e2e-tests-deployment-nm7fb, resource: bindings, ignored listing per whitelist
Mar 21 12:30:48.785: INFO: namespace e2e-tests-deployment-nm7fb deletion completed in 6.089487573s
• [SLOW TEST:29.301 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:30:48.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 21 12:30:48.919: INFO: Waiting up to 5m0s for pod "pod-cb916201-6b6f-11ea-946c-0242ac11000f" in namespace "e2e-tests-emptydir-rvkzq" to be "success or failure"
Mar 21 12:30:48.925: INFO: Pod "pod-cb916201-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.888793ms
Mar 21 12:30:50.932: INFO: Pod "pod-cb916201-6b6f-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012568751s
Mar 21 12:30:52.935: INFO: Pod "pod-cb916201-6b6f-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015697327s
STEP: Saw pod success
Mar 21 12:30:52.935: INFO: Pod "pod-cb916201-6b6f-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:30:52.938: INFO: Trying to get logs from node hunter-worker pod pod-cb916201-6b6f-11ea-946c-0242ac11000f container test-container:
STEP: delete the pod
Mar 21 12:30:52.987: INFO: Waiting for pod pod-cb916201-6b6f-11ea-946c-0242ac11000f to disappear
Mar 21 12:30:52.997: INFO: Pod pod-cb916201-6b6f-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:30:52.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rvkzq" for this suite.
Mar 21 12:30:59.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:30:59.105: INFO: namespace: e2e-tests-emptydir-rvkzq, resource: bindings, ignored listing per whitelist
Mar 21 12:30:59.150: INFO: namespace e2e-tests-emptydir-rvkzq deletion completed in 6.149790867s
• [SLOW TEST:10.364 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:30:59.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-ftg9t
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-ftg9t
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-ftg9t
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-ftg9t
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-ftg9t
Mar 21 12:31:03.342: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ftg9t, name: ss-0, uid: d32f06d1-6b6f-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete.
Mar 21 12:31:11.245: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ftg9t, name: ss-0, uid: d32f06d1-6b6f-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
Mar 21 12:31:11.268: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ftg9t, name: ss-0, uid: d32f06d1-6b6f-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
Mar 21 12:31:11.297: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-ftg9t
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-ftg9t
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-ftg9t and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Mar 21 12:31:25.448: INFO: Deleting all statefulset in ns e2e-tests-statefulset-ftg9t
Mar 21 12:31:25.451: INFO: Scaling statefulset ss to 0
Mar 21 12:31:35.469: INFO: Waiting for statefulset status.replicas updated to 0
Mar 21 12:31:35.472: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:31:35.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-ftg9t" for this suite.
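The StatefulSet test above watches pod ss-0 go Pending → Failed → deleted, then waits for the controller to bring it back. Recreation is distinguished from a restart of the same pod by the pod's UID changing. A self-contained Go sketch of that detection logic (the observation history and the post-recreation UID here are hypothetical, modeled loosely on the "Observed stateful pod ... uid: ..., status phase: ..." lines in the log):

```go
package main

import "fmt"

// Observation is one sighting of the stateful pod, as in the log lines
// "Observed stateful pod ... uid: ..., status phase: ...".
type Observation struct {
	UID   string
	Phase string
}

// recreated reports whether the pod came back under a new UID in Running
// phase, i.e. the controller deleted the failed pod and created a fresh one.
// A restart of the same pod keeps its UID and does not count.
func recreated(obs []Observation) bool {
	if len(obs) == 0 {
		return false
	}
	first := obs[0].UID
	for _, o := range obs[1:] {
		if o.UID != first && o.Phase == "Running" {
			return true
		}
	}
	return false
}

func main() {
	history := []Observation{
		{"d32f06d1", "Pending"}, // truncated UID, for illustration only
		{"d32f06d1", "Failed"},
		{"e1a2b3c4", "Running"}, // hypothetical new UID after recreation
	}
	fmt.Println("recreated:", recreated(history))
}
```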
Mar 21 12:31:41.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:31:41.560: INFO: namespace: e2e-tests-statefulset-ftg9t, resource: bindings, ignored listing per whitelist
Mar 21 12:31:41.610: INFO: namespace e2e-tests-statefulset-ftg9t deletion completed in 6.119191898s
• [SLOW TEST:42.460 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:31:41.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:31:45.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-zk564" for this suite.
Mar 21 12:32:35.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:32:35.795: INFO: namespace: e2e-tests-kubelet-test-zk564, resource: bindings, ignored listing per whitelist
Mar 21 12:32:35.843: INFO: namespace e2e-tests-kubelet-test-zk564 deletion completed in 50.084646571s
• [SLOW TEST:54.234 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:32:35.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Mar 21 12:32:43.020: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:32:44.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-tgkzt" for this suite.
Mar 21 12:33:06.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:33:06.192: INFO: namespace: e2e-tests-replicaset-tgkzt, resource: bindings, ignored listing per whitelist
Mar 21 12:33:06.194: INFO: namespace e2e-tests-replicaset-tgkzt deletion completed in 22.12545574s
• [SLOW TEST:30.351 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:33:06.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Mar 21 12:33:06.303: INFO: Waiting up to 5m0s for pod "var-expansion-1d740a9f-6b70-11ea-946c-0242ac11000f" in namespace "e2e-tests-var-expansion-nqfsm" to be "success or failure"
Mar 21 12:33:06.326: INFO: Pod "var-expansion-1d740a9f-6b70-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.559958ms
Mar 21 12:33:08.330: INFO: Pod "var-expansion-1d740a9f-6b70-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026789097s
Mar 21 12:33:10.334: INFO: Pod "var-expansion-1d740a9f-6b70-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031416815s
STEP: Saw pod success
Mar 21 12:33:10.335: INFO: Pod "var-expansion-1d740a9f-6b70-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:33:10.338: INFO: Trying to get logs from node hunter-worker pod var-expansion-1d740a9f-6b70-11ea-946c-0242ac11000f container dapi-container:
STEP: delete the pod
Mar 21 12:33:10.356: INFO: Waiting for pod var-expansion-1d740a9f-6b70-11ea-946c-0242ac11000f to disappear
Mar 21 12:33:10.360: INFO: Pod var-expansion-1d740a9f-6b70-11ea-946c-0242ac11000f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:33:10.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-nqfsm" for this suite.
Mar 21 12:33:16.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:33:16.416: INFO: namespace: e2e-tests-var-expansion-nqfsm, resource: bindings, ignored listing per whitelist
Mar 21 12:33:16.473: INFO: namespace e2e-tests-var-expansion-nqfsm deletion completed in 6.110123129s
• [SLOW TEST:10.279 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:33:16.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-239571b7-6b70-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 21 12:33:16.710: INFO: Waiting up to 5m0s for pod "pod-secrets-23a8f020-6b70-11ea-946c-0242ac11000f" in namespace "e2e-tests-secrets-xnrks" to be "success or failure"
Mar 21 12:33:16.724: INFO: Pod "pod-secrets-23a8f020-6b70-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.368938ms
Mar 21 12:33:18.763: INFO: Pod "pod-secrets-23a8f020-6b70-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05301885s
Mar 21 12:33:20.766: INFO: Pod "pod-secrets-23a8f020-6b70-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056835239s
STEP: Saw pod success
Mar 21 12:33:20.766: INFO: Pod "pod-secrets-23a8f020-6b70-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:33:20.769: INFO: Trying to get logs from node hunter-worker pod pod-secrets-23a8f020-6b70-11ea-946c-0242ac11000f container secret-volume-test:
STEP: delete the pod
Mar 21 12:33:20.787: INFO: Waiting for pod pod-secrets-23a8f020-6b70-11ea-946c-0242ac11000f to disappear
Mar 21 12:33:20.840: INFO: Pod pod-secrets-23a8f020-6b70-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:33:20.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xnrks" for this suite.
Mar 21 12:33:26.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:33:26.878: INFO: namespace: e2e-tests-secrets-xnrks, resource: bindings, ignored listing per whitelist
Mar 21 12:33:26.938: INFO: namespace e2e-tests-secrets-xnrks deletion completed in 6.094363441s
STEP: Destroying namespace "e2e-tests-secret-namespace-48z4d" for this suite.
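Every "Creating a pod to test …" block in this log waits with the same pattern: poll the pod's phase roughly every 2s until it reaches a terminal phase ("Succeeded"/"Failed") or the 5m0s timeout expires, emitting one `Phase="…" Elapsed: …` record per poll. A minimal shell sketch of that loop follows; `get_phase` is a hypothetical stand-in for the real lookup (against a live cluster it would be something like `kubectl get pod "$1" -o jsonpath='{.status.phase}'`), stubbed on elapsed time so the sketch runs anywhere:

```shell
#!/bin/sh
# Stand-in for the real phase lookup; keyed on elapsed seconds so the
# sketch is self-contained (assumption: not the framework's actual code).
get_phase() {
    if [ "$2" -ge 4 ]; then echo "Succeeded"; else echo "Pending"; fi
}

# Poll until the pod reaches a terminal phase or the timeout expires,
# mirroring 'Waiting up to 5m0s for pod ... to be "success or failure"'.
wait_for_pod() {
    pod=$1; timeout=$2; elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        phase=$(get_phase "$pod" "$elapsed")
        echo "Pod \"$pod\": Phase=\"$phase\". Elapsed: ${elapsed}s"
        case "$phase" in
            Succeeded|Failed) return 0 ;;
        esac
        elapsed=$((elapsed + 2))   # the real framework sleeps ~2s per poll
    done
    return 1
}

wait_for_pod "pod-example" 300 && echo 'satisfied condition "success or failure"'
```

The real framework implements this in Go with exponential-free fixed-interval polling; the sketch only illustrates the terminal-phase-or-timeout contract visible in the records above.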
Mar 21 12:33:32.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:33:33.025: INFO: namespace: e2e-tests-secret-namespace-48z4d, resource: bindings, ignored listing per whitelist
Mar 21 12:33:33.035: INFO: namespace e2e-tests-secret-namespace-48z4d deletion completed in 6.097740602s
• [SLOW TEST:16.562 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:33:33.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0321 12:33:43.162583 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 21 12:33:43.162: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:33:43.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-x9kgt" for this suite.
Mar 21 12:33:49.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:33:49.218: INFO: namespace: e2e-tests-gc-x9kgt, resource: bindings, ignored listing per whitelist
Mar 21 12:33:49.298: INFO: namespace e2e-tests-gc-x9kgt deletion completed in 6.132273956s
• [SLOW TEST:16.263 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:33:49.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Mar 21 12:33:53.459: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-37267c8c-6b70-11ea-946c-0242ac11000f", GenerateName:"", Namespace:"e2e-tests-pods-8g82d", SelfLink:"/api/v1/namespaces/e2e-tests-pods-8g82d/pods/pod-submit-remove-37267c8c-6b70-11ea-946c-0242ac11000f", UID:"3727cd55-6b70-11ea-99e8-0242ac110002", ResourceVersion:"1028285", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720390829, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"399988016"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vkbxd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001bdca00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vkbxd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00247f418), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001a69f80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00247f460)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00247f480)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00247f488), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00247f48c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390829, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390832, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390832, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720390829, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.199", StartTime:(*v1.Time)(0xc001bde200), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001bde220), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://30e6c28c143b624c29630adb2d09cb2a52baa3a5a1c707a479b498c92307dfee"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:34:01.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8g82d" for this suite.
Mar 21 12:34:07.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:34:07.327: INFO: namespace: e2e-tests-pods-8g82d, resource: bindings, ignored listing per whitelist
Mar 21 12:34:07.397: INFO: namespace e2e-tests-pods-8g82d deletion completed in 6.091340023s
• [SLOW TEST:18.098 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:34:07.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-41f2845d-6b70-11ea-946c-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 21 12:34:07.525: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-41f31d95-6b70-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-cxb7z" to be "success or failure"
Mar 21 12:34:07.543: INFO: Pod "pod-projected-configmaps-41f31d95-6b70-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.916249ms
Mar 21 12:34:10.199: INFO: Pod "pod-projected-configmaps-41f31d95-6b70-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.674089138s
Mar 21 12:34:12.204: INFO: Pod "pod-projected-configmaps-41f31d95-6b70-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.678453861s
STEP: Saw pod success
Mar 21 12:34:12.204: INFO: Pod "pod-projected-configmaps-41f31d95-6b70-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:34:12.206: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-41f31d95-6b70-11ea-946c-0242ac11000f container projected-configmap-volume-test:
STEP: delete the pod
Mar 21 12:34:12.267: INFO: Waiting for pod pod-projected-configmaps-41f31d95-6b70-11ea-946c-0242ac11000f to disappear
Mar 21 12:34:12.272: INFO: Pod pod-projected-configmaps-41f31d95-6b70-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:34:12.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cxb7z" for this suite.
Mar 21 12:34:18.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:34:18.389: INFO: namespace: e2e-tests-projected-cxb7z, resource: bindings, ignored listing per whitelist
Mar 21 12:34:18.393: INFO: namespace e2e-tests-projected-cxb7z deletion completed in 6.118597264s
• [SLOW TEST:10.996 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:34:18.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 21 12:34:18.490: INFO: Waiting up to 5m0s for pod "downwardapi-volume-487b3980-6b70-11ea-946c-0242ac11000f" in namespace "e2e-tests-projected-vblmn" to be "success or failure"
Mar 21 12:34:18.494: INFO: Pod "downwardapi-volume-487b3980-6b70-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.521416ms
Mar 21 12:34:20.498: INFO: Pod "downwardapi-volume-487b3980-6b70-11ea-946c-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007969888s
Mar 21 12:34:22.503: INFO: Pod "downwardapi-volume-487b3980-6b70-11ea-946c-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012233677s
STEP: Saw pod success
Mar 21 12:34:22.503: INFO: Pod "downwardapi-volume-487b3980-6b70-11ea-946c-0242ac11000f" satisfied condition "success or failure"
Mar 21 12:34:22.506: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-487b3980-6b70-11ea-946c-0242ac11000f container client-container:
STEP: delete the pod
Mar 21 12:34:22.525: INFO: Waiting for pod downwardapi-volume-487b3980-6b70-11ea-946c-0242ac11000f to disappear
Mar 21 12:34:22.551: INFO: Pod downwardapi-volume-487b3980-6b70-11ea-946c-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 21 12:34:22.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vblmn" for this suite.
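The DNS conformance test that follows drives `dig` inside its probe pods with a retry loop: each lookup gets up to 600 one-second attempts, and the first non-empty answer writes an `OK` marker file under `/results` that the test then collects. A stripped-down sketch of that write-OK-on-success pattern is below; `resolve` is a hypothetical stand-in for the real `dig +notcp +noall +answer +search <name> A` call, stubbed so the sketch runs without cluster DNS:

```shell
#!/bin/sh
# Stand-in resolver (assumption, not the real dig call): answers only for
# the service name, returns an empty answer for everything else.
resolve() {
    case "$1" in
        dns-test-service*) echo "10.110.146.227" ;;
        *) : ;;
    esac
}

# Retry up to 600 times; on the first non-empty answer, write an OK marker
# file, mirroring: check="$(dig ...)" && test -n "$check" && echo OK > /results/...
probe() {
    name=$1; out=$2; i=0
    while [ "$i" -lt 600 ]; do
        check=$(resolve "$name")
        if [ -n "$check" ]; then
            echo OK > "$out"
            return 0
        fi
        i=$((i + 1))   # the real loop sleeps 1s between attempts
    done
    return 1
}

results=$(mktemp -d)
probe dns-test-service "$results/wheezy_udp@dns-test-service" \
    && cat "$results/wheezy_udp@dns-test-service"   # prints "OK"
```

In the real test the marker filenames encode the query (`wheezy_udp@…`, `jessie_tcp@…`), and the "Unable to read …" records in the log are the framework failing to fetch those markers while the answers have not yet appeared.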
Mar 21 12:34:28.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 21 12:34:28.624: INFO: namespace: e2e-tests-projected-vblmn, resource: bindings, ignored listing per whitelist
Mar 21 12:34:28.666: INFO: namespace e2e-tests-projected-vblmn deletion completed in 6.111485307s
• [SLOW TEST:10.272 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 21 12:34:28.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6qwmp A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6qwmp A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-6qwmp;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6qwmp.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6qwmp.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6qwmp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-6qwmp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6qwmp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-6qwmp.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6qwmp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 227.146.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.146.227_udp@PTR;check="$$(dig +tcp +noall +answer +search 227.146.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.146.227_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6qwmp A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-6qwmp;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6qwmp A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6qwmp.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6qwmp.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6qwmp.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-6qwmp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6qwmp.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-6qwmp.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6qwmp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 227.146.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.146.227_udp@PTR;check="$$(dig +tcp +noall +answer +search 227.146.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.146.227_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 21 12:34:34.848: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.851: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.854: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.866: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.868: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.889: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.891: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.894: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.896: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.899: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.903: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.905: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.908: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:34.924: INFO: Lookups using e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-6qwmp jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc]
Mar 21 12:34:39.930: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:39.934: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:39.938: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:39.949: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:39.952: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:39.972: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:39.974: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:39.977: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:39.980: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:39.983: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f)
Mar 21 12:34:39.987: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested
resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:39.989: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:39.992: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:40.012: INFO: Lookups using e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-6qwmp jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc] Mar 21 12:34:44.929: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.933: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.936: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp from pod 
e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.947: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.950: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.970: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.973: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.975: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.977: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.980: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the 
requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.982: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.985: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:44.988: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:45.005: INFO: Lookups using e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-6qwmp jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc] Mar 21 12:34:49.929: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.933: INFO: Unable to read wheezy_tcp@dns-test-service from pod 
e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.936: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.949: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.952: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.976: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.978: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.980: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.983: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the 
requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.986: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.989: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.992: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:49.995: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:50.014: INFO: Lookups using e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-6qwmp jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc] Mar 21 12:34:54.930: INFO: Unable to read 
wheezy_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.934: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.938: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.950: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.954: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.974: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.977: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.980: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not 
find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.982: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.985: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.987: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.990: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:54.993: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:55.012: INFO: Lookups using e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-6qwmp jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp 
jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc] Mar 21 12:34:59.930: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.934: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.937: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.950: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.953: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.975: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.978: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not 
find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.981: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.984: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.988: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.991: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.995: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:34:59.998: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc from pod e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f: the server could not find the requested resource (get pods dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f) Mar 21 12:35:00.019: INFO: Lookups using e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-6qwmp 
wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-6qwmp jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp jessie_udp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@dns-test-service.e2e-tests-dns-6qwmp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6qwmp.svc] Mar 21 12:35:05.020: INFO: DNS probes using e2e-tests-dns-6qwmp/dns-test-4ea2669c-6b70-11ea-946c-0242ac11000f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 21 12:35:05.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-6qwmp" for this suite. Mar 21 12:35:11.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 21 12:35:11.559: INFO: namespace: e2e-tests-dns-6qwmp, resource: bindings, ignored listing per whitelist Mar 21 12:35:11.602: INFO: namespace e2e-tests-dns-6qwmp deletion completed in 6.298154128s • [SLOW TEST:42.935 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSMar 21 12:35:11.602: INFO: Running AfterSuite actions on all nodes Mar 21 12:35:11.602: INFO: Running AfterSuite actions on node 1 Mar 21 12:35:11.602: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 6507.829 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS