I0311 10:46:29.445099 6 e2e.go:224] Starting e2e run "90423c28-6385-11ea-bacb-0242ac11000a" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1583923588 - Will randomize all specs Will run 201 of 2164 specs Mar 11 10:46:29.595: INFO: >>> kubeConfig: /root/.kube/config Mar 11 10:46:29.597: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Mar 11 10:46:29.610: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 11 10:46:29.627: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 11 10:46:29.627: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 11 10:46:29.627: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Mar 11 10:46:29.634: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Mar 11 10:46:29.634: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Mar 11 10:46:29.634: INFO: e2e test version: v1.13.12 Mar 11 10:46:29.634: INFO: kube-apiserver version: v1.13.12 SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:46:29.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap Mar 11 10:46:29.715: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-90ab51a9-6385-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 10:46:29.738: INFO: Waiting up to 5m0s for pod "pod-configmaps-90ab96d7-6385-11ea-bacb-0242ac11000a" in namespace "e2e-tests-configmap-lsfp9" to be "success or failure" Mar 11 10:46:29.745: INFO: Pod "pod-configmaps-90ab96d7-6385-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.410132ms Mar 11 10:46:31.749: INFO: Pod "pod-configmaps-90ab96d7-6385-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011024379s STEP: Saw pod success Mar 11 10:46:31.749: INFO: Pod "pod-configmaps-90ab96d7-6385-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 10:46:31.751: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-90ab96d7-6385-11ea-bacb-0242ac11000a container configmap-volume-test: STEP: delete the pod Mar 11 10:46:31.809: INFO: Waiting for pod pod-configmaps-90ab96d7-6385-11ea-bacb-0242ac11000a to disappear Mar 11 10:46:31.812: INFO: Pod pod-configmaps-90ab96d7-6385-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:46:31.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-lsfp9" for this suite. 
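For reference, a minimal Go sketch of the kind of pod spec this test exercises: one ConfigMap consumed through two volumes in the same pod. It is not the suite's own source; the ConfigMap name, image, command and mount paths are illustrative, and it assumes the k8s.io/api and k8s.io/apimachinery modules are on the module path.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two volumes backed by the same ConfigMap, mounted at different paths
	// in a single container (names and paths are illustrative).
	cmRef := corev1.LocalObjectReference{Name: "configmap-test-volume"}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "configmap-volume-1", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cmRef}}},
				{Name: "configmap-volume-2", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cmRef}}},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/cm-1/* /etc/cm-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/cm-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/cm-2"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}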
Mar 11 10:46:37.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:46:37.906: INFO: namespace: e2e-tests-configmap-lsfp9, resource: bindings, ignored listing per whitelist Mar 11 10:46:37.946: INFO: namespace e2e-tests-configmap-lsfp9 deletion completed in 6.130886367s • [SLOW TEST:8.311 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:46:37.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-959cd180-6385-11ea-bacb-0242ac11000a STEP: Creating secret with name s-test-opt-upd-959cd1db-6385-11ea-bacb-0242ac11000a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-959cd180-6385-11ea-bacb-0242ac11000a STEP: Updating secret s-test-opt-upd-959cd1db-6385-11ea-bacb-0242ac11000a STEP: Creating secret with name s-test-opt-create-959cd238-6385-11ea-bacb-0242ac11000a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:48:06.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-92xc7" for this suite. 
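For reference, a minimal Go sketch of the piece that makes this behavior possible: a secret volume marked Optional, so the pod keeps running while the referenced Secret is deleted, updated or recreated, and the kubelet reflects each change in the mounted files. This is not the suite's source; the secret name, image and mount path are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true // the referenced Secret may be absent without failing the pod
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-optional-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create", // illustrative name
						Optional:   &optional,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "while true; do ls /etc/secret-volume; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}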
Mar 11 10:48:28.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:48:28.500: INFO: namespace: e2e-tests-secrets-92xc7, resource: bindings, ignored listing per whitelist Mar 11 10:48:28.551: INFO: namespace e2e-tests-secrets-92xc7 deletion completed in 22.073194421s • [SLOW TEST:110.605 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:48:28.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Mar 11 10:48:30.700: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:48:54.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-2xqnx" for this suite. Mar 11 10:49:00.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:49:00.835: INFO: namespace: e2e-tests-namespaces-2xqnx, resource: bindings, ignored listing per whitelist Mar 11 10:49:00.865: INFO: namespace e2e-tests-namespaces-2xqnx deletion completed in 6.079006672s STEP: Destroying namespace "e2e-tests-nsdeletetest-d95lz" for this suite. Mar 11 10:49:00.866: INFO: Namespace e2e-tests-nsdeletetest-d95lz was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-2ncwj" for this suite. 
Mar 11 10:49:06.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:49:06.946: INFO: namespace: e2e-tests-nsdeletetest-2ncwj, resource: bindings, ignored listing per whitelist Mar 11 10:49:06.962: INFO: namespace e2e-tests-nsdeletetest-2ncwj deletion completed in 6.095919145s • [SLOW TEST:38.411 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:49:06.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0311 10:49:17.087795 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 11 10:49:17.087: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:49:17.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-8f872" for this suite. 
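For reference, a hedged sketch of the delete option that distinguishes "not orphaning" from orphaning when the owning replication controller is removed; this is not how the suite issues the call, only the metav1 option involved.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Deleting an owner (e.g. a ReplicationController) with a non-Orphan
	// propagation policy lets the garbage collector remove the dependent
	// pods, which is what the spec above waits for.
	propagation := metav1.DeletePropagationBackground // DeletePropagationOrphan would keep the pods
	opts := metav1.DeleteOptions{PropagationPolicy: &propagation}
	fmt.Printf("delete options: %+v\n", opts)
}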
Mar 11 10:49:23.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:49:23.144: INFO: namespace: e2e-tests-gc-8f872, resource: bindings, ignored listing per whitelist Mar 11 10:49:23.220: INFO: namespace e2e-tests-gc-8f872 deletion completed in 6.129009464s • [SLOW TEST:16.258 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:49:23.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 10:49:23.307: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f81ddb6b-6385-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-9scm5" to be "success or failure" Mar 11 10:49:23.312: INFO: Pod "downwardapi-volume-f81ddb6b-6385-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.873445ms Mar 11 10:49:25.316: INFO: Pod "downwardapi-volume-f81ddb6b-6385-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008105398s STEP: Saw pod success Mar 11 10:49:25.316: INFO: Pod "downwardapi-volume-f81ddb6b-6385-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 10:49:25.318: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f81ddb6b-6385-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 10:49:25.331: INFO: Waiting for pod downwardapi-volume-f81ddb6b-6385-11ea-bacb-0242ac11000a to disappear Mar 11 10:49:25.336: INFO: Pod downwardapi-volume-f81ddb6b-6385-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:49:25.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9scm5" for this suite. 
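For reference, a minimal Go sketch of a downward API volume that exposes only the pod name, which is what this spec mounts and reads back; the volume name and file path are illustrative, not the suite's own values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A downward API volume projecting just metadata.name into a file.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}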
Mar 11 10:49:31.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:49:31.380: INFO: namespace: e2e-tests-downward-api-9scm5, resource: bindings, ignored listing per whitelist Mar 11 10:49:31.421: INFO: namespace e2e-tests-downward-api-9scm5 deletion completed in 6.082472599s • [SLOW TEST:8.201 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:49:31.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:49:31.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-kqbxd" for this suite. 
Mar 11 10:49:37.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:49:37.576: INFO: namespace: e2e-tests-services-kqbxd, resource: bindings, ignored listing per whitelist Mar 11 10:49:37.596: INFO: namespace e2e-tests-services-kqbxd deletion completed in 6.09233531s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.175 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:49:37.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 10:49:37.762: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"00ba7741-6386-11ea-9978-0242ac11000d", Controller:(*bool)(0xc0019ed1ca), BlockOwnerDeletion:(*bool)(0xc0019ed1cb)}} Mar 11 10:49:37.772: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"00b63a6a-6386-11ea-9978-0242ac11000d", Controller:(*bool)(0xc00090e19a), BlockOwnerDeletion:(*bool)(0xc00090e19b)}} Mar 11 10:49:37.808: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"00b6adf4-6386-11ea-9978-0242ac11000d", Controller:(*bool)(0xc000d479d2), BlockOwnerDeletion:(*bool)(0xc000d479d3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:49:42.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-4sx57" for this suite. 
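For reference, a Go sketch that reproduces the ownership cycle logged above (pod1 owned by pod3, pod2 by pod1, pod3 by pod2). The UIDs are placeholders, and the controller/blockOwnerDeletion values are assumed to be true as in typical usage; the log only shows them as pointers.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	controller, block := true, true
	owner := func(name, uid string) metav1.OwnerReference {
		return metav1.OwnerReference{
			APIVersion:         "v1",
			Kind:               "Pod",
			Name:               name,
			UID:                types.UID(uid), // placeholder UID
			Controller:         &controller,
			BlockOwnerDeletion: &block,
		}
	}
	// A deliberate dependency circle: the garbage collector must still make progress.
	pod1 := corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod1",
		OwnerReferences: []metav1.OwnerReference{owner("pod3", "uid-of-pod3")}}}
	pod2 := corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod2",
		OwnerReferences: []metav1.OwnerReference{owner("pod1", "uid-of-pod1")}}}
	pod3 := corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod3",
		OwnerReferences: []metav1.OwnerReference{owner("pod2", "uid-of-pod2")}}}
	for _, p := range []corev1.Pod{pod1, pod2, pod3} {
		fmt.Printf("%s.OwnerReferences=%+v\n", p.Name, p.OwnerReferences)
	}
}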
Mar 11 10:49:48.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:49:48.899: INFO: namespace: e2e-tests-gc-4sx57, resource: bindings, ignored listing per whitelist Mar 11 10:49:48.948: INFO: namespace e2e-tests-gc-4sx57 deletion completed in 6.0934727s • [SLOW TEST:11.351 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:49:48.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 11 10:49:53.339: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 10:49:53.346: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 10:49:55.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 10:49:55.349: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 10:49:57.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 10:49:57.349: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 10:49:59.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 10:49:59.350: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 10:50:01.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 10:50:01.366: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 10:50:03.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 10:50:03.349: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 10:50:05.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 10:50:05.349: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 10:50:07.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 10:50:07.350: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 10:50:09.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 10:50:09.349: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 10:50:11.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 10:50:11.349: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 10:50:13.346: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 10:50:13.348: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: 
check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:50:13.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dv256" for this suite. Mar 11 10:50:35.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:50:35.428: INFO: namespace: e2e-tests-container-lifecycle-hook-dv256, resource: bindings, ignored listing per whitelist Mar 11 10:50:35.461: INFO: namespace e2e-tests-container-lifecycle-hook-dv256 deletion completed in 22.102830056s • [SLOW TEST:46.512 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:50:35.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 11 10:50:35.548: INFO: Waiting up to 5m0s for pod "downward-api-2331644d-6386-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-5rvgh" to be "success or failure" Mar 11 10:50:35.562: INFO: Pod "downward-api-2331644d-6386-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.126639ms Mar 11 10:50:37.566: INFO: Pod "downward-api-2331644d-6386-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01794255s STEP: Saw pod success Mar 11 10:50:37.566: INFO: Pod "downward-api-2331644d-6386-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 10:50:37.568: INFO: Trying to get logs from node hunter-worker2 pod downward-api-2331644d-6386-11ea-bacb-0242ac11000a container dapi-container: STEP: delete the pod Mar 11 10:50:37.598: INFO: Waiting for pod downward-api-2331644d-6386-11ea-bacb-0242ac11000a to disappear Mar 11 10:50:37.623: INFO: Pod downward-api-2331644d-6386-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:50:37.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5rvgh" for this suite. 
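For reference, a minimal Go sketch of env vars sourced from the container's own limits and requests via resourceFieldRef, the mechanism this spec validates; the variable names and container name are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Each env var reads one resource value from the named container's spec.
	resourceEnv := func(name, resource string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: "dapi-container", // illustrative container name
					Resource:      resource,
				},
			},
		}
	}
	env := []corev1.EnvVar{
		resourceEnv("CPU_LIMIT", "limits.cpu"),
		resourceEnv("MEMORY_LIMIT", "limits.memory"),
		resourceEnv("CPU_REQUEST", "requests.cpu"),
		resourceEnv("MEMORY_REQUEST", "requests.memory"),
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}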
Mar 11 10:50:43.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:50:43.661: INFO: namespace: e2e-tests-downward-api-5rvgh, resource: bindings, ignored listing per whitelist Mar 11 10:50:43.712: INFO: namespace e2e-tests-downward-api-5rvgh deletion completed in 6.08605087s • [SLOW TEST:8.251 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:50:43.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 11 10:50:43.816: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tbnhz,SelfLink:/api/v1/namespaces/e2e-tests-watch-tbnhz/configmaps/e2e-watch-test-label-changed,UID:281d8e47-6386-11ea-9978-0242ac11000d,ResourceVersion:498240,Generation:0,CreationTimestamp:2020-03-11 10:50:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 11 10:50:43.816: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tbnhz,SelfLink:/api/v1/namespaces/e2e-tests-watch-tbnhz/configmaps/e2e-watch-test-label-changed,UID:281d8e47-6386-11ea-9978-0242ac11000d,ResourceVersion:498241,Generation:0,CreationTimestamp:2020-03-11 10:50:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 11 10:50:43.816: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tbnhz,SelfLink:/api/v1/namespaces/e2e-tests-watch-tbnhz/configmaps/e2e-watch-test-label-changed,UID:281d8e47-6386-11ea-9978-0242ac11000d,ResourceVersion:498242,Generation:0,CreationTimestamp:2020-03-11 10:50:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 11 10:50:53.875: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tbnhz,SelfLink:/api/v1/namespaces/e2e-tests-watch-tbnhz/configmaps/e2e-watch-test-label-changed,UID:281d8e47-6386-11ea-9978-0242ac11000d,ResourceVersion:498263,Generation:0,CreationTimestamp:2020-03-11 10:50:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 11 10:50:53.876: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tbnhz,SelfLink:/api/v1/namespaces/e2e-tests-watch-tbnhz/configmaps/e2e-watch-test-label-changed,UID:281d8e47-6386-11ea-9978-0242ac11000d,ResourceVersion:498264,Generation:0,CreationTimestamp:2020-03-11 10:50:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 11 10:50:53.876: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-tbnhz,SelfLink:/api/v1/namespaces/e2e-tests-watch-tbnhz/configmaps/e2e-watch-test-label-changed,UID:281d8e47-6386-11ea-9978-0242ac11000d,ResourceVersion:498265,Generation:0,CreationTimestamp:2020-03-11 10:50:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:50:53.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-tbnhz" for this suite. 
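For reference, a small Go sketch of the label-selector semantics behind this spec: once the label value changes, the object no longer matches the watch's selector, so the watcher sees DELETED, and restoring the value shows up as ADDED. The selector value mirrors the log; the snippet assumes k8s.io/apimachinery is available.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	sel := labels.SelectorFromSet(labels.Set{"watch-this-configmap": "label-changed-and-restored"})

	// Matches while the label is in place, stops matching once it is changed.
	fmt.Println(sel.Matches(labels.Set{"watch-this-configmap": "label-changed-and-restored"})) // true
	fmt.Println(sel.Matches(labels.Set{"watch-this-configmap": "some-other-value"}))           // false
}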
Mar 11 10:50:59.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:50:59.925: INFO: namespace: e2e-tests-watch-tbnhz, resource: bindings, ignored listing per whitelist Mar 11 10:50:59.969: INFO: namespace e2e-tests-watch-tbnhz deletion completed in 6.089632288s • [SLOW TEST:16.256 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:50:59.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-31cc2262-6386-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 10:51:00.059: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-31cd347e-6386-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-mfbjj" to be "success or failure" Mar 11 10:51:00.098: INFO: Pod "pod-projected-secrets-31cd347e-6386-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.747275ms Mar 11 10:51:02.102: INFO: Pod "pod-projected-secrets-31cd347e-6386-11ea-bacb-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 2.042772884s Mar 11 10:51:04.106: INFO: Pod "pod-projected-secrets-31cd347e-6386-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047020549s STEP: Saw pod success Mar 11 10:51:04.106: INFO: Pod "pod-projected-secrets-31cd347e-6386-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 10:51:04.109: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-31cd347e-6386-11ea-bacb-0242ac11000a container projected-secret-volume-test: STEP: delete the pod Mar 11 10:51:04.136: INFO: Waiting for pod pod-projected-secrets-31cd347e-6386-11ea-bacb-0242ac11000a to disappear Mar 11 10:51:04.141: INFO: Pod pod-projected-secrets-31cd347e-6386-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:51:04.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mfbjj" for this suite. 
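For reference, a minimal Go sketch of a projected secret volume with a key-to-path mapping and an explicit item mode, which is what this spec consumes; the secret name, key, path and mode are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // per-item file mode
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // illustrative key
							Path: "new-path-data-1", // remapped file name inside the volume
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}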
Mar 11 10:51:10.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:51:10.174: INFO: namespace: e2e-tests-projected-mfbjj, resource: bindings, ignored listing per whitelist Mar 11 10:51:10.233: INFO: namespace e2e-tests-projected-mfbjj deletion completed in 6.088002095s • [SLOW TEST:10.264 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:51:10.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 10:51:10.338: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.72376ms)
Mar 11 10:51:10.340: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.900036ms)
Mar 11 10:51:10.342: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.061275ms)
Mar 11 10:51:10.344: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.905134ms)
Mar 11 10:51:10.346: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.141603ms)
Mar 11 10:51:10.349: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.652861ms)
Mar 11 10:51:10.351: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.133936ms)
Mar 11 10:51:10.353: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.742489ms)
Mar 11 10:51:10.355: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.914571ms)
Mar 11 10:51:10.365: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 9.969725ms)
Mar 11 10:51:10.371: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 6.145246ms)
Mar 11 10:51:10.373: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.097168ms)
Mar 11 10:51:10.375: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.666847ms)
Mar 11 10:51:10.376: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.623347ms)
Mar 11 10:51:10.378: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.545346ms)
Mar 11 10:51:10.379: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.580987ms)
Mar 11 10:51:10.381: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.621336ms)
Mar 11 10:51:10.383: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.797124ms)
Mar 11 10:51:10.385: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.053896ms)
Mar 11 10:51:10.387: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/
(200; 1.735152ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:51:10.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-7tkcf" for this suite. Mar 11 10:51:16.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:51:16.459: INFO: namespace: e2e-tests-proxy-7tkcf, resource: bindings, ignored listing per whitelist Mar 11 10:51:16.470: INFO: namespace e2e-tests-proxy-7tkcf deletion completed in 6.08133994s • [SLOW TEST:6.237 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:51:16.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 10:51:16.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-l2kfd' Mar 11 10:51:17.888: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 10:51:17.888: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Mar 11 10:51:21.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-l2kfd' Mar 11 10:51:22.087: INFO: stderr: "" Mar 11 10:51:22.087: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:51:22.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-l2kfd" for this suite. Mar 11 10:51:28.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:51:28.196: INFO: namespace: e2e-tests-kubectl-l2kfd, resource: bindings, ignored listing per whitelist Mar 11 10:51:28.220: INFO: namespace e2e-tests-kubectl-l2kfd deletion completed in 6.121105614s • [SLOW TEST:11.750 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:51:28.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Mar 11 10:51:28.317: INFO: Waiting up to 5m0s for pod "pod-42a533e8-6386-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-ntnkw" to be "success or failure" Mar 11 10:51:28.333: INFO: Pod "pod-42a533e8-6386-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.643201ms Mar 11 10:51:30.336: INFO: Pod "pod-42a533e8-6386-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0197346s Mar 11 10:51:32.340: INFO: Pod "pod-42a533e8-6386-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023107976s STEP: Saw pod success Mar 11 10:51:32.340: INFO: Pod "pod-42a533e8-6386-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 10:51:32.342: INFO: Trying to get logs from node hunter-worker pod pod-42a533e8-6386-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 10:51:32.365: INFO: Waiting for pod pod-42a533e8-6386-11ea-bacb-0242ac11000a to disappear Mar 11 10:51:32.394: INFO: Pod pod-42a533e8-6386-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:51:32.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ntnkw" for this suite. Mar 11 10:51:38.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:51:38.481: INFO: namespace: e2e-tests-emptydir-ntnkw, resource: bindings, ignored listing per whitelist Mar 11 10:51:38.518: INFO: namespace e2e-tests-emptydir-ntnkw deletion completed in 6.120853831s • [SLOW TEST:10.298 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:51:38.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 10:51:38.589: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48c49fd5-6386-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-mxjbr" to be "success or failure" Mar 11 10:51:38.593: INFO: Pod "downwardapi-volume-48c49fd5-6386-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312407ms Mar 11 10:51:40.596: INFO: Pod "downwardapi-volume-48c49fd5-6386-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007378102s STEP: Saw pod success Mar 11 10:51:40.596: INFO: Pod "downwardapi-volume-48c49fd5-6386-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 10:51:40.598: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-48c49fd5-6386-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 10:51:40.625: INFO: Waiting for pod downwardapi-volume-48c49fd5-6386-11ea-bacb-0242ac11000a to disappear Mar 11 10:51:40.629: INFO: Pod downwardapi-volume-48c49fd5-6386-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:51:40.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mxjbr" for this suite. Mar 11 10:51:46.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:51:46.697: INFO: namespace: e2e-tests-downward-api-mxjbr, resource: bindings, ignored listing per whitelist Mar 11 10:51:46.719: INFO: namespace e2e-tests-downward-api-mxjbr deletion completed in 6.087078571s • [SLOW TEST:8.201 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:51:46.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 11 10:51:46.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kjghh' Mar 11 10:51:47.103: INFO: stderr: "" Mar 11 10:51:47.103: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 11 10:51:47.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kjghh' Mar 11 10:51:47.220: INFO: stderr: "" Mar 11 10:51:47.220: INFO: stdout: "update-demo-nautilus-f87dl update-demo-nautilus-hrj62 " Mar 11 10:51:47.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f87dl -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kjghh' Mar 11 10:51:47.305: INFO: stderr: "" Mar 11 10:51:47.305: INFO: stdout: "" Mar 11 10:51:47.305: INFO: update-demo-nautilus-f87dl is created but not running Mar 11 10:51:52.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kjghh' Mar 11 10:51:52.402: INFO: stderr: "" Mar 11 10:51:52.402: INFO: stdout: "update-demo-nautilus-f87dl update-demo-nautilus-hrj62 " Mar 11 10:51:52.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f87dl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kjghh' Mar 11 10:51:52.481: INFO: stderr: "" Mar 11 10:51:52.481: INFO: stdout: "true" Mar 11 10:51:52.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f87dl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kjghh' Mar 11 10:51:52.546: INFO: stderr: "" Mar 11 10:51:52.546: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 10:51:52.546: INFO: validating pod update-demo-nautilus-f87dl Mar 11 10:51:52.549: INFO: got data: { "image": "nautilus.jpg" } Mar 11 10:51:52.549: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 10:51:52.549: INFO: update-demo-nautilus-f87dl is verified up and running Mar 11 10:51:52.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hrj62 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kjghh' Mar 11 10:51:52.646: INFO: stderr: "" Mar 11 10:51:52.646: INFO: stdout: "true" Mar 11 10:51:52.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hrj62 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kjghh' Mar 11 10:51:52.721: INFO: stderr: "" Mar 11 10:51:52.721: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 10:51:52.721: INFO: validating pod update-demo-nautilus-hrj62 Mar 11 10:51:52.724: INFO: got data: { "image": "nautilus.jpg" } Mar 11 10:51:52.724: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 10:51:52.724: INFO: update-demo-nautilus-hrj62 is verified up and running STEP: using delete to clean up resources Mar 11 10:51:52.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kjghh' Mar 11 10:51:52.803: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 11 10:51:52.803: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 11 10:51:52.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-kjghh' Mar 11 10:51:52.886: INFO: stderr: "No resources found.\n" Mar 11 10:51:52.886: INFO: stdout: "" Mar 11 10:51:52.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-kjghh -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 11 10:51:52.961: INFO: stderr: "" Mar 11 10:51:52.961: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:51:52.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kjghh" for this suite. Mar 11 10:51:58.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:51:59.044: INFO: namespace: e2e-tests-kubectl-kjghh, resource: bindings, ignored listing per whitelist Mar 11 10:51:59.070: INFO: namespace e2e-tests-kubectl-kjghh deletion completed in 6.097204336s • [SLOW TEST:12.351 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:51:59.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:51:59.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-5t8q4" for this suite. 
Mar 11 10:52:05.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:52:05.297: INFO: namespace: e2e-tests-kubelet-test-5t8q4, resource: bindings, ignored listing per whitelist Mar 11 10:52:05.311: INFO: namespace e2e-tests-kubelet-test-5t8q4 deletion completed in 6.095770665s • [SLOW TEST:6.241 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:52:05.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 11 10:52:05.389: INFO: PodSpec: initContainers in spec.initContainers Mar 11 10:52:47.497: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-58bed7f2-6386-11ea-bacb-0242ac11000a", GenerateName:"", Namespace:"e2e-tests-init-container-wjq2n", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-wjq2n/pods/pod-init-58bed7f2-6386-11ea-bacb-0242ac11000a", UID:"58c11e7c-6386-11ea-9978-0242ac11000d", ResourceVersion:"498714", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719520725, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"389113194"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-t4bb6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002067c00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t4bb6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t4bb6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-t4bb6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001fefe68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0010fbb60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001fefef0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001feff10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001feff18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001feff1c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719520725, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719520725, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719520725, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719520725, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.11", PodIP:"10.244.2.248", StartTime:(*v1.Time)(0xc001ae83a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001ae83e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0018bb490)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d01558015f394a4dd5c3b14b5275ebfd6e2a8ac09a87865efafd9aa847dbe6eb"}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001ae8400), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001ae83c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:52:47.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-wjq2n" for this suite. Mar 11 10:53:09.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:53:09.621: INFO: namespace: e2e-tests-init-container-wjq2n, resource: bindings, ignored listing per whitelist Mar 11 10:53:09.631: INFO: namespace e2e-tests-init-container-wjq2n deletion completed in 22.090869641s • [SLOW TEST:64.319 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:53:09.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-7f186ba9-6386-11ea-bacb-0242ac11000a STEP: Creating configMap with name cm-test-opt-upd-7f186c10-6386-11ea-bacb-0242ac11000a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7f186ba9-6386-11ea-bacb-0242ac11000a STEP: Updating configmap cm-test-opt-upd-7f186c10-6386-11ea-bacb-0242ac11000a STEP: Creating configMap with name cm-test-opt-create-7f186c47-6386-11ea-bacb-0242ac11000a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:54:28.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-sdfrq" for this 
suite. Mar 11 10:54:50.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:54:50.246: INFO: namespace: e2e-tests-configmap-sdfrq, resource: bindings, ignored listing per whitelist Mar 11 10:54:50.257: INFO: namespace e2e-tests-configmap-sdfrq deletion completed in 22.07326962s • [SLOW TEST:100.626 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:54:50.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-bb0d68d9-6386-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 10:54:50.334: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bb0e5546-6386-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-pxxzf" to be "success or failure" Mar 11 10:54:50.341: INFO: Pod "pod-projected-configmaps-bb0e5546-6386-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.886469ms Mar 11 10:54:52.344: INFO: Pod "pod-projected-configmaps-bb0e5546-6386-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009967147s Mar 11 10:54:54.347: INFO: Pod "pod-projected-configmaps-bb0e5546-6386-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013173503s STEP: Saw pod success Mar 11 10:54:54.347: INFO: Pod "pod-projected-configmaps-bb0e5546-6386-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 10:54:54.349: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-bb0e5546-6386-11ea-bacb-0242ac11000a container projected-configmap-volume-test: STEP: delete the pod Mar 11 10:54:54.364: INFO: Waiting for pod pod-projected-configmaps-bb0e5546-6386-11ea-bacb-0242ac11000a to disappear Mar 11 10:54:54.369: INFO: Pod pod-projected-configmaps-bb0e5546-6386-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:54:54.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pxxzf" for this suite. 
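The ConfigMap "optional updates" spec above mounts ConfigMaps with optional set to true, then deletes, updates and creates them and waits for the kubelet to sync the changes into the volume. A minimal sketch of such a mount, assuming hypothetical ConfigMap and pod names:

# cm-opt may not exist yet; the pod still starts because the volume is optional.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-opt              # hypothetical name
spec:
  containers:
  - name: view
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-opt              # hypothetical name
      optional: true            # changes to cm-opt show up under /etc/cm after the kubelet sync period
EOF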
Mar 11 10:55:00.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:55:00.405: INFO: namespace: e2e-tests-projected-pxxzf, resource: bindings, ignored listing per whitelist Mar 11 10:55:00.448: INFO: namespace e2e-tests-projected-pxxzf deletion completed in 6.076186926s • [SLOW TEST:10.190 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:55:00.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-c121c0c5-6386-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 10:55:00.532: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1227ccf-6386-11ea-bacb-0242ac11000a" in namespace "e2e-tests-configmap-dfwjj" to be "success or failure" Mar 11 10:55:00.551: INFO: Pod "pod-configmaps-c1227ccf-6386-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.242367ms Mar 11 10:55:02.555: INFO: Pod "pod-configmaps-c1227ccf-6386-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022241285s Mar 11 10:55:04.559: INFO: Pod "pod-configmaps-c1227ccf-6386-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026192057s STEP: Saw pod success Mar 11 10:55:04.559: INFO: Pod "pod-configmaps-c1227ccf-6386-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 10:55:04.561: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-c1227ccf-6386-11ea-bacb-0242ac11000a container configmap-volume-test: STEP: delete the pod Mar 11 10:55:04.581: INFO: Waiting for pod pod-configmaps-c1227ccf-6386-11ea-bacb-0242ac11000a to disappear Mar 11 10:55:04.585: INFO: Pod pod-configmaps-c1227ccf-6386-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:55:04.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dfwjj" for this suite. 
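The ConfigMap "mappings and Item mode set" spec concluding above remaps a single key to a custom path and sets a per-item file mode on the projected file. A minimal sketch of the same idea, with hypothetical names (mode 0400 is just an illustrative value):

kubectl create configmap cm-map --from-literal=data-2=value-2   # hypothetical ConfigMap

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-map              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    # Show the mapped path and the per-item mode, then print the content.
    command: ["sh", "-c", "ls -lR /etc/configmap-volume && cat /etc/configmap-volume/my/path/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: cm-map
      items:
      - key: data-2
        path: my/path/data-2    # key remapped to a nested path
        mode: 0400              # per-item mode overriding the volume default
EOF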
Mar 11 10:55:10.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:55:10.659: INFO: namespace: e2e-tests-configmap-dfwjj, resource: bindings, ignored listing per whitelist Mar 11 10:55:10.678: INFO: namespace e2e-tests-configmap-dfwjj deletion completed in 6.090047214s • [SLOW TEST:10.230 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:55:10.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Mar 11 10:55:12.759: INFO: Pod pod-hostip-c737b015-6386-11ea-bacb-0242ac11000a has hostIP: 172.17.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:55:12.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-cxm7q" for this suite. 
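The Pods "should get a host IP" spec above simply waits for the scheduled pod to report the IP of its node in status.hostIP. The same field can be read directly; POD and NS below are hypothetical placeholders:

POD=pod-hostip-xxxxx; NS=demo   # hypothetical placeholders
# Prints the IP of the node the pod landed on (172.17.0.12 in the run above).
kubectl get pod "$POD" -n "$NS" -o jsonpath='{.status.hostIP}'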
Mar 11 10:55:34.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:55:34.828: INFO: namespace: e2e-tests-pods-cxm7q, resource: bindings, ignored listing per whitelist Mar 11 10:55:34.882: INFO: namespace e2e-tests-pods-cxm7q deletion completed in 22.118783501s • [SLOW TEST:24.203 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:55:34.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 11 10:55:37.554: INFO: Successfully updated pod "annotationupdated5af8567-6386-11ea-bacb-0242ac11000a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:55:39.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bf6sp" for this suite. 
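The projected downwardAPI spec above changes a pod annotation and waits for the projected file to pick up the new value. A minimal sketch of that wiring, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo         # hypothetical name
  annotations:
    build: "one"
spec:
  containers:
  - name: view
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF

# Updating the annotation is eventually reflected in /etc/podinfo/annotations.
kubectl annotate pod annotation-demo build=two --overwrite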
Mar 11 10:56:01.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:56:01.686: INFO: namespace: e2e-tests-projected-bf6sp, resource: bindings, ignored listing per whitelist Mar 11 10:56:01.739: INFO: namespace e2e-tests-projected-bf6sp deletion completed in 22.157035412s • [SLOW TEST:26.857 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:56:01.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Mar 11 10:56:01.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 11 10:56:01.937: INFO: stderr: "" Mar 11 10:56:01.937: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32774\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32774/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:56:01.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-g8qzt" for this suite. 
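The cluster-info spec above only asserts that the control-plane entries appear in the command output. The equivalent manual check (the grep exits non-zero if the entry is missing; the exact wording depends on the kubectl version):

kubectl cluster-info | grep -E 'Kubernetes (master|control plane)'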
Mar 11 10:56:07.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:56:07.976: INFO: namespace: e2e-tests-kubectl-g8qzt, resource: bindings, ignored listing per whitelist Mar 11 10:56:08.009: INFO: namespace e2e-tests-kubectl-g8qzt deletion completed in 6.068854318s • [SLOW TEST:6.270 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:56:08.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Mar 11 10:56:08.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xwxvv' Mar 11 10:56:08.300: INFO: stderr: "" Mar 11 10:56:08.300: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Mar 11 10:56:09.304: INFO: Selector matched 1 pods for map[app:redis] Mar 11 10:56:09.304: INFO: Found 0 / 1 Mar 11 10:56:10.303: INFO: Selector matched 1 pods for map[app:redis] Mar 11 10:56:10.304: INFO: Found 1 / 1 Mar 11 10:56:10.304: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 11 10:56:10.305: INFO: Selector matched 1 pods for map[app:redis] Mar 11 10:56:10.305: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Mar 11 10:56:10.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-cfntz redis-master --namespace=e2e-tests-kubectl-xwxvv' Mar 11 10:56:10.384: INFO: stderr: "" Mar 11 10:56:10.384: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 Mar 10:56:09.495 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Mar 10:56:09.495 # Server started, Redis version 3.2.12\n1:M 11 Mar 10:56:09.495 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Mar 10:56:09.495 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 11 10:56:10.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-cfntz redis-master --namespace=e2e-tests-kubectl-xwxvv --tail=1' Mar 11 10:56:10.458: INFO: stderr: "" Mar 11 10:56:10.458: INFO: stdout: "1:M 11 Mar 10:56:09.495 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 11 10:56:10.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-cfntz redis-master --namespace=e2e-tests-kubectl-xwxvv --limit-bytes=1' Mar 11 10:56:10.535: INFO: stderr: "" Mar 11 10:56:10.535: INFO: stdout: " " STEP: exposing timestamps Mar 11 10:56:10.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-cfntz redis-master --namespace=e2e-tests-kubectl-xwxvv --tail=1 --timestamps' Mar 11 10:56:10.608: INFO: stderr: "" Mar 11 10:56:10.609: INFO: stdout: "2020-03-11T10:56:09.495958389Z 1:M 11 Mar 10:56:09.495 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 11 10:56:13.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-cfntz redis-master --namespace=e2e-tests-kubectl-xwxvv --since=1s' Mar 11 10:56:13.234: INFO: stderr: "" Mar 11 10:56:13.234: INFO: stdout: "" Mar 11 10:56:13.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-cfntz redis-master --namespace=e2e-tests-kubectl-xwxvv --since=24h' Mar 11 10:56:13.314: INFO: stderr: "" Mar 11 10:56:13.314: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 Mar 10:56:09.495 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Mar 10:56:09.495 # Server started, Redis version 3.2.12\n1:M 11 Mar 10:56:09.495 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Mar 10:56:09.495 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Mar 11 10:56:13.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xwxvv' Mar 11 10:56:13.397: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 10:56:13.397: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 11 10:56:13.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-xwxvv' Mar 11 10:56:13.479: INFO: stderr: "No resources found.\n" Mar 11 10:56:13.479: INFO: stdout: "" Mar 11 10:56:13.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-xwxvv -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 11 10:56:13.552: INFO: stderr: "" Mar 11 10:56:13.552: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:56:13.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xwxvv" for this suite. 
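The Kubectl logs spec above exercises the tail, byte-limit, timestamp and time-window filters against the redis-master pod. The same filters as standalone commands, with hypothetical pod and namespace names (the run above used the older "kubectl log" alias; "kubectl logs" is the current spelling):

POD=redis-master-xxxxx; NS=demo                                 # hypothetical placeholders
kubectl logs "$POD" -c redis-master -n "$NS" --tail=1           # last line only
kubectl logs "$POD" -c redis-master -n "$NS" --limit-bytes=1    # first byte only
kubectl logs "$POD" -c redis-master -n "$NS" --tail=1 --timestamps
kubectl logs "$POD" -c redis-master -n "$NS" --since=1s         # only very recent output
kubectl logs "$POD" -c redis-master -n "$NS" --since=24h        # everything from the last day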
Mar 11 10:56:19.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:56:19.609: INFO: namespace: e2e-tests-kubectl-xwxvv, resource: bindings, ignored listing per whitelist Mar 11 10:56:19.663: INFO: namespace e2e-tests-kubectl-xwxvv deletion completed in 6.108088557s • [SLOW TEST:11.654 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:56:19.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-j8nhb.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j8nhb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-j8nhb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-j8nhb.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j8nhb.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-j8nhb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 11 10:56:23.870: INFO: DNS probes using e2e-tests-dns-j8nhb/dns-test-f05d732e-6386-11ea-bacb-0242ac11000a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:56:23.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-j8nhb" for this suite. 
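The DNS spec above runs dig in a loop from wheezy and jessie prober pods, writing an OK marker per successful lookup. Stripped of the loop and the $$ escaping, the core checks are plain dig queries against the cluster DNS; they must run from inside a pod so the cluster search path and resolver apply:

# UDP and TCP lookups of the API service name; each should return an A record.
dig +notcp +noall +answer +search kubernetes.default A
dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A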
Mar 11 10:56:29.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:56:30.019: INFO: namespace: e2e-tests-dns-j8nhb, resource: bindings, ignored listing per whitelist Mar 11 10:56:30.045: INFO: namespace e2e-tests-dns-j8nhb deletion completed in 6.134945466s • [SLOW TEST:10.382 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:56:30.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-f691f9c3-6386-11ea-bacb-0242ac11000a STEP: Creating secret with name s-test-opt-upd-f691fa16-6386-11ea-bacb-0242ac11000a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f691f9c3-6386-11ea-bacb-0242ac11000a STEP: Updating secret s-test-opt-upd-f691fa16-6386-11ea-bacb-0242ac11000a STEP: Creating secret with name s-test-opt-create-f691fa33-6386-11ea-bacb-0242ac11000a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:57:48.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rffvz" for this suite. 
Mar 11 10:58:10.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:58:10.632: INFO: namespace: e2e-tests-projected-rffvz, resource: bindings, ignored listing per whitelist Mar 11 10:58:10.694: INFO: namespace e2e-tests-projected-rffvz deletion completed in 22.087497169s • [SLOW TEST:100.650 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:58:10.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 11 10:58:13.289: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3284abe5-6387-11ea-bacb-0242ac11000a" Mar 11 10:58:13.289: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3284abe5-6387-11ea-bacb-0242ac11000a" in namespace "e2e-tests-pods-tcwkz" to be "terminated due to deadline exceeded" Mar 11 10:58:13.295: INFO: Pod "pod-update-activedeadlineseconds-3284abe5-6387-11ea-bacb-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 5.789548ms Mar 11 10:58:15.298: INFO: Pod "pod-update-activedeadlineseconds-3284abe5-6387-11ea-bacb-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 2.009268227s Mar 11 10:58:17.302: INFO: Pod "pod-update-activedeadlineseconds-3284abe5-6387-11ea-bacb-0242ac11000a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.012872272s Mar 11 10:58:17.302: INFO: Pod "pod-update-activedeadlineseconds-3284abe5-6387-11ea-bacb-0242ac11000a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:58:17.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-tcwkz" for this suite. 
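The Pods activeDeadlineSeconds spec above patches a running pod's deadline and waits for the kubelet to kill it with reason DeadlineExceeded. activeDeadlineSeconds is one of the few pod spec fields that can be changed in place; a sketch with hypothetical names and an illustrative 5-second deadline:

POD=pod-update-activedeadlineseconds-xxxxx; NS=demo   # hypothetical placeholders
kubectl patch pod "$POD" -n "$NS" --type merge -p '{"spec":{"activeDeadlineSeconds":5}}'
# Shortly afterwards the pod reports Failed/DeadlineExceeded.
kubectl get pod "$POD" -n "$NS" -o jsonpath='{.status.phase}/{.status.reason}'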
Mar 11 10:58:23.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:58:23.336: INFO: namespace: e2e-tests-pods-tcwkz, resource: bindings, ignored listing per whitelist Mar 11 10:58:23.388: INFO: namespace e2e-tests-pods-tcwkz deletion completed in 6.083395774s • [SLOW TEST:12.694 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:58:23.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 11 10:58:23.456: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:58:27.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-9dmph" for this suite. 
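The InitContainer spec above ("should invoke init containers on a RestartAlways pod") expects both init containers to run to completion, in order, before the app container starts; the failing variant earlier in the run differs only in that init1 runs /bin/false. A minimal sketch of the passing shape, with a hypothetical pod name and the images seen in the pod dump above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo           # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/true"]
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
# Once init1 and init2 exit 0 in sequence, run1 starts and the pod becomes Ready.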
Mar 11 10:58:49.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:58:50.024: INFO: namespace: e2e-tests-init-container-9dmph, resource: bindings, ignored listing per whitelist Mar 11 10:58:50.079: INFO: namespace e2e-tests-init-container-9dmph deletion completed in 22.099454858s • [SLOW TEST:26.691 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:58:50.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-4a01fcc7-6387-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 10:58:50.196: INFO: Waiting up to 5m0s for pod "pod-secrets-4a02ab96-6387-11ea-bacb-0242ac11000a" in namespace "e2e-tests-secrets-wd5m4" to be "success or failure" Mar 11 10:58:50.200: INFO: Pod "pod-secrets-4a02ab96-6387-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.604013ms Mar 11 10:58:52.218: INFO: Pod "pod-secrets-4a02ab96-6387-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021324865s STEP: Saw pod success Mar 11 10:58:52.218: INFO: Pod "pod-secrets-4a02ab96-6387-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 10:58:52.220: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-4a02ab96-6387-11ea-bacb-0242ac11000a container secret-volume-test: STEP: delete the pod Mar 11 10:58:52.237: INFO: Waiting for pod pod-secrets-4a02ab96-6387-11ea-bacb-0242ac11000a to disappear Mar 11 10:58:52.242: INFO: Pod pod-secrets-4a02ab96-6387-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:58:52.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-wd5m4" for this suite. 
Mar 11 10:58:58.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:58:58.297: INFO: namespace: e2e-tests-secrets-wd5m4, resource: bindings, ignored listing per whitelist Mar 11 10:58:58.341: INFO: namespace e2e-tests-secrets-wd5m4 deletion completed in 6.095165086s • [SLOW TEST:8.262 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:58:58.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 11 10:59:00.975: INFO: Successfully updated pod "labelsupdate4ef02673-6387-11ea-bacb-0242ac11000a" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:59:03.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-66gj4" for this suite. 
Mar 11 10:59:25.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:59:25.050: INFO: namespace: e2e-tests-downward-api-66gj4, resource: bindings, ignored listing per whitelist Mar 11 10:59:25.090: INFO: namespace e2e-tests-downward-api-66gj4 deletion completed in 22.085167593s • [SLOW TEST:26.749 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:59:25.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 11 10:59:25.168: INFO: Waiting up to 5m0s for pod "pod-5edf0ec4-6387-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-dvfdg" to be "success or failure" Mar 11 10:59:25.171: INFO: Pod "pod-5edf0ec4-6387-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.547897ms Mar 11 10:59:27.177: INFO: Pod "pod-5edf0ec4-6387-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00896327s STEP: Saw pod success Mar 11 10:59:27.177: INFO: Pod "pod-5edf0ec4-6387-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 10:59:27.180: INFO: Trying to get logs from node hunter-worker2 pod pod-5edf0ec4-6387-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 10:59:27.197: INFO: Waiting for pod pod-5edf0ec4-6387-11ea-bacb-0242ac11000a to disappear Mar 11 10:59:27.202: INFO: Pod pod-5edf0ec4-6387-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:59:27.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dvfdg" for this suite. 
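The EmptyDir spec above writes a 0644 file as a non-root user onto a tmpfs-backed emptyDir and verifies the mode and content. A minimal sketch, with hypothetical names (medium: Memory is what selects the tmpfs backing):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo       # hypothetical name
spec:
  securityContext:
    runAsUser: 1000             # non-root writer
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    # umask 0022 makes the new file 0644; the mount line confirms tmpfs backing.
    command: ["sh", "-c", "umask 0022 && echo hello > /mnt/volume/f && ls -l /mnt/volume/f && mount | grep /mnt/volume"]
    volumeMounts:
    - name: v
      mountPath: /mnt/volume
  volumes:
  - name: v
    emptyDir:
      medium: Memory
EOF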
Mar 11 10:59:33.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 10:59:33.254: INFO: namespace: e2e-tests-emptydir-dvfdg, resource: bindings, ignored listing per whitelist Mar 11 10:59:33.306: INFO: namespace e2e-tests-emptydir-dvfdg deletion completed in 6.0942318s • [SLOW TEST:8.215 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 10:59:33.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-l79l8 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 11 10:59:33.368: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 11 10:59:57.467: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.77:8080/dial?request=hostName&protocol=udp&host=10.244.2.3&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-l79l8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 10:59:57.467: INFO: >>> kubeConfig: /root/.kube/config I0311 10:59:57.502196 6 log.go:172] (0xc0022720b0) (0xc000dc60a0) Create stream I0311 10:59:57.502221 6 log.go:172] (0xc0022720b0) (0xc000dc60a0) Stream added, broadcasting: 1 I0311 10:59:57.504258 6 log.go:172] (0xc0022720b0) Reply frame received for 1 I0311 10:59:57.504294 6 log.go:172] (0xc0022720b0) (0xc000dc6140) Create stream I0311 10:59:57.504305 6 log.go:172] (0xc0022720b0) (0xc000dc6140) Stream added, broadcasting: 3 I0311 10:59:57.505226 6 log.go:172] (0xc0022720b0) Reply frame received for 3 I0311 10:59:57.505247 6 log.go:172] (0xc0022720b0) (0xc001e88000) Create stream I0311 10:59:57.505255 6 log.go:172] (0xc0022720b0) (0xc001e88000) Stream added, broadcasting: 5 I0311 10:59:57.506098 6 log.go:172] (0xc0022720b0) Reply frame received for 5 I0311 10:59:57.578986 6 log.go:172] (0xc0022720b0) Data frame received for 3 I0311 10:59:57.579050 6 log.go:172] (0xc000dc6140) (3) Data frame handling I0311 10:59:57.579075 6 log.go:172] (0xc000dc6140) (3) Data frame sent I0311 10:59:57.579549 6 log.go:172] (0xc0022720b0) Data frame received for 3 I0311 10:59:57.579582 6 log.go:172] (0xc000dc6140) (3) Data frame handling I0311 10:59:57.579612 6 log.go:172] (0xc0022720b0) Data frame received for 5 I0311 10:59:57.579628 6 log.go:172] (0xc001e88000) (5) Data frame handling I0311 10:59:57.581213 6 log.go:172] (0xc0022720b0) Data frame received for 1 I0311 
10:59:57.581246 6 log.go:172] (0xc000dc60a0) (1) Data frame handling I0311 10:59:57.581264 6 log.go:172] (0xc000dc60a0) (1) Data frame sent I0311 10:59:57.581277 6 log.go:172] (0xc0022720b0) (0xc000dc60a0) Stream removed, broadcasting: 1 I0311 10:59:57.581294 6 log.go:172] (0xc0022720b0) Go away received I0311 10:59:57.581508 6 log.go:172] (0xc0022720b0) (0xc000dc60a0) Stream removed, broadcasting: 1 I0311 10:59:57.581532 6 log.go:172] (0xc0022720b0) (0xc000dc6140) Stream removed, broadcasting: 3 I0311 10:59:57.581551 6 log.go:172] (0xc0022720b0) (0xc001e88000) Stream removed, broadcasting: 5 Mar 11 10:59:57.581: INFO: Waiting for endpoints: map[] Mar 11 10:59:57.584: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.77:8080/dial?request=hostName&protocol=udp&host=10.244.1.76&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-l79l8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 10:59:57.584: INFO: >>> kubeConfig: /root/.kube/config I0311 10:59:57.609895 6 log.go:172] (0xc000d9a6e0) (0xc001e883c0) Create stream I0311 10:59:57.609916 6 log.go:172] (0xc000d9a6e0) (0xc001e883c0) Stream added, broadcasting: 1 I0311 10:59:57.612447 6 log.go:172] (0xc000d9a6e0) Reply frame received for 1 I0311 10:59:57.612481 6 log.go:172] (0xc000d9a6e0) (0xc000dc6280) Create stream I0311 10:59:57.612492 6 log.go:172] (0xc000d9a6e0) (0xc000dc6280) Stream added, broadcasting: 3 I0311 10:59:57.613206 6 log.go:172] (0xc000d9a6e0) Reply frame received for 3 I0311 10:59:57.613237 6 log.go:172] (0xc000d9a6e0) (0xc00207a140) Create stream I0311 10:59:57.613248 6 log.go:172] (0xc000d9a6e0) (0xc00207a140) Stream added, broadcasting: 5 I0311 10:59:57.614020 6 log.go:172] (0xc000d9a6e0) Reply frame received for 5 I0311 10:59:57.689428 6 log.go:172] (0xc000d9a6e0) Data frame received for 3 I0311 10:59:57.689461 6 log.go:172] (0xc000dc6280) (3) Data frame handling I0311 10:59:57.689480 6 log.go:172] (0xc000dc6280) (3) Data frame sent I0311 10:59:57.689985 6 log.go:172] (0xc000d9a6e0) Data frame received for 3 I0311 10:59:57.690009 6 log.go:172] (0xc000dc6280) (3) Data frame handling I0311 10:59:57.690039 6 log.go:172] (0xc000d9a6e0) Data frame received for 5 I0311 10:59:57.690054 6 log.go:172] (0xc00207a140) (5) Data frame handling I0311 10:59:57.691243 6 log.go:172] (0xc000d9a6e0) Data frame received for 1 I0311 10:59:57.691256 6 log.go:172] (0xc001e883c0) (1) Data frame handling I0311 10:59:57.691269 6 log.go:172] (0xc001e883c0) (1) Data frame sent I0311 10:59:57.691281 6 log.go:172] (0xc000d9a6e0) (0xc001e883c0) Stream removed, broadcasting: 1 I0311 10:59:57.691295 6 log.go:172] (0xc000d9a6e0) Go away received I0311 10:59:57.691421 6 log.go:172] (0xc000d9a6e0) (0xc001e883c0) Stream removed, broadcasting: 1 I0311 10:59:57.691439 6 log.go:172] (0xc000d9a6e0) (0xc000dc6280) Stream removed, broadcasting: 3 I0311 10:59:57.691450 6 log.go:172] (0xc000d9a6e0) (0xc00207a140) Stream removed, broadcasting: 5 Mar 11 10:59:57.691: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 10:59:57.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-l79l8" for this suite. 
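The UDP check above works by curling the /dial endpoint of the test container pod (10.244.1.77:8080), which in turn sends a UDP probe to the netserver pod on each node (10.244.2.3 and 10.244.1.76, port 8081) and reports which hostnames answered; the empty map in "Waiting for endpoints: map[]" indicates no expected hostnames remain outstanding. A rough sketch of a pod issuing the same kind of request, with an illustrative curl image and hypothetical name, and the pod IPs copied from the log:

apiVersion: v1
kind: Pod
metadata:
  name: dial-check-sketch               # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dial
    image: curlimages/curl              # illustrative; the suite runs curl from a hostexec container instead
    # 10.244.1.77 is the test container pod and 10.244.2.3 one of the netserver pods, as seen in the log above
    command: ["curl", "-g", "-q", "-s", "http://10.244.1.77:8080/dial?request=hostName&protocol=udp&host=10.244.2.3&port=8081&tries=1"]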
Mar 11 11:00:19.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:00:19.745: INFO: namespace: e2e-tests-pod-network-test-l79l8, resource: bindings, ignored listing per whitelist Mar 11 11:00:19.807: INFO: namespace e2e-tests-pod-network-test-l79l8 deletion completed in 22.112555036s • [SLOW TEST:46.501 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:00:19.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-7f7f3d69-6387-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 11:00:19.934: INFO: Waiting up to 5m0s for pod "pod-secrets-7f824ab7-6387-11ea-bacb-0242ac11000a" in namespace "e2e-tests-secrets-cv2qv" to be "success or failure" Mar 11 11:00:19.938: INFO: Pod "pod-secrets-7f824ab7-6387-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.879842ms Mar 11 11:00:21.942: INFO: Pod "pod-secrets-7f824ab7-6387-11ea-bacb-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 2.008114685s Mar 11 11:00:23.945: INFO: Pod "pod-secrets-7f824ab7-6387-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011722447s STEP: Saw pod success Mar 11 11:00:23.945: INFO: Pod "pod-secrets-7f824ab7-6387-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:00:23.948: INFO: Trying to get logs from node hunter-worker pod pod-secrets-7f824ab7-6387-11ea-bacb-0242ac11000a container secret-volume-test: STEP: delete the pod Mar 11 11:00:23.963: INFO: Waiting for pod pod-secrets-7f824ab7-6387-11ea-bacb-0242ac11000a to disappear Mar 11 11:00:23.967: INFO: Pod pod-secrets-7f824ab7-6387-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:00:23.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-cv2qv" for this suite. 
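The defaultMode case above mounts a secret volume whose files are created with an explicit mode instead of the 0644 default. A minimal sketch, with hypothetical names, an illustrative busybox image, and 0400 chosen purely as an example mode:

apiVersion: v1
kind: Secret
metadata:
  name: secret-defaultmode-sketch       # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-sketch              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                      # illustrative image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-defaultmode-sketch
      defaultMode: 0400                 # files in the volume get this mode instead of the 0644 default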
Mar 11 11:00:29.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:00:30.016: INFO: namespace: e2e-tests-secrets-cv2qv, resource: bindings, ignored listing per whitelist Mar 11 11:00:30.070: INFO: namespace e2e-tests-secrets-cv2qv deletion completed in 6.095189245s • [SLOW TEST:10.263 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:00:30.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:00:30.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-859ddaf2-6387-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-vz6f7" to be "success or failure" Mar 11 11:00:30.178: INFO: Pod "downwardapi-volume-859ddaf2-6387-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192955ms Mar 11 11:00:32.193: INFO: Pod "downwardapi-volume-859ddaf2-6387-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021420009s STEP: Saw pod success Mar 11 11:00:32.193: INFO: Pod "downwardapi-volume-859ddaf2-6387-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:00:32.196: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-859ddaf2-6387-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:00:32.216: INFO: Waiting for pod downwardapi-volume-859ddaf2-6387-11ea-bacb-0242ac11000a to disappear Mar 11 11:00:32.221: INFO: Pod downwardapi-volume-859ddaf2-6387-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:00:32.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vz6f7" for this suite. 
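The "podname only" case above mounts a projected volume with a single downwardAPI item exposing metadata.name as a file, which the client container reads back. A minimal sketch, with hypothetical names and an illustrative image:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-sketch      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name  # the file contains the pod's own name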
Mar 11 11:00:38.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:00:38.318: INFO: namespace: e2e-tests-projected-vz6f7, resource: bindings, ignored listing per whitelist Mar 11 11:00:38.320: INFO: namespace e2e-tests-projected-vz6f7 deletion completed in 6.095038486s • [SLOW TEST:8.250 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:00:38.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-rkzlq [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-rkzlq STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-rkzlq Mar 11 11:00:38.413: INFO: Found 0 stateful pods, waiting for 1 Mar 11 11:00:48.416: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 11 11:00:48.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rkzlq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 11:00:48.639: INFO: stderr: "I0311 11:00:48.551146 581 log.go:172] (0xc00083c2c0) (0xc0006f2640) Create stream\nI0311 11:00:48.551190 581 log.go:172] (0xc00083c2c0) (0xc0006f2640) Stream added, broadcasting: 1\nI0311 11:00:48.552912 581 log.go:172] (0xc00083c2c0) Reply frame received for 1\nI0311 11:00:48.552959 581 log.go:172] (0xc00083c2c0) (0xc0005c8be0) Create stream\nI0311 11:00:48.552980 581 log.go:172] (0xc00083c2c0) (0xc0005c8be0) Stream added, broadcasting: 3\nI0311 11:00:48.553721 581 log.go:172] (0xc00083c2c0) Reply frame received for 3\nI0311 11:00:48.553750 581 log.go:172] (0xc00083c2c0) (0xc0006b6000) Create stream\nI0311 11:00:48.553759 581 log.go:172] (0xc00083c2c0) (0xc0006b6000) Stream added, broadcasting: 5\nI0311 11:00:48.554581 581 log.go:172] (0xc00083c2c0) Reply frame received for 5\nI0311 11:00:48.633874 581 log.go:172] (0xc00083c2c0) Data frame received for 
5\nI0311 11:00:48.633911 581 log.go:172] (0xc0006b6000) (5) Data frame handling\nI0311 11:00:48.633940 581 log.go:172] (0xc00083c2c0) Data frame received for 3\nI0311 11:00:48.633950 581 log.go:172] (0xc0005c8be0) (3) Data frame handling\nI0311 11:00:48.633959 581 log.go:172] (0xc0005c8be0) (3) Data frame sent\nI0311 11:00:48.634017 581 log.go:172] (0xc00083c2c0) Data frame received for 3\nI0311 11:00:48.634040 581 log.go:172] (0xc0005c8be0) (3) Data frame handling\nI0311 11:00:48.636027 581 log.go:172] (0xc00083c2c0) Data frame received for 1\nI0311 11:00:48.636044 581 log.go:172] (0xc0006f2640) (1) Data frame handling\nI0311 11:00:48.636064 581 log.go:172] (0xc0006f2640) (1) Data frame sent\nI0311 11:00:48.636086 581 log.go:172] (0xc00083c2c0) (0xc0006f2640) Stream removed, broadcasting: 1\nI0311 11:00:48.636123 581 log.go:172] (0xc00083c2c0) Go away received\nI0311 11:00:48.636280 581 log.go:172] (0xc00083c2c0) (0xc0006f2640) Stream removed, broadcasting: 1\nI0311 11:00:48.636304 581 log.go:172] (0xc00083c2c0) (0xc0005c8be0) Stream removed, broadcasting: 3\nI0311 11:00:48.636322 581 log.go:172] (0xc00083c2c0) (0xc0006b6000) Stream removed, broadcasting: 5\n" Mar 11 11:00:48.639: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 11:00:48.639: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 11:00:48.642: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 11 11:00:58.646: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 11 11:00:58.646: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 11:00:58.681: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999591s Mar 11 11:00:59.685: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.974957688s Mar 11 11:01:00.689: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.970837024s Mar 11 11:01:01.694: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.966513491s Mar 11 11:01:02.699: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.962056202s Mar 11 11:01:03.703: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.95656962s Mar 11 11:01:04.707: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.952626868s Mar 11 11:01:05.712: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.948363754s Mar 11 11:01:06.716: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.944056903s Mar 11 11:01:07.721: INFO: Verifying statefulset ss doesn't scale past 1 for another 939.369776ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-rkzlq Mar 11 11:01:08.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rkzlq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 11:01:08.934: INFO: stderr: "I0311 11:01:08.875501 604 log.go:172] (0xc000162790) (0xc00062f400) Create stream\nI0311 11:01:08.875547 604 log.go:172] (0xc000162790) (0xc00062f400) Stream added, broadcasting: 1\nI0311 11:01:08.877577 604 log.go:172] (0xc000162790) Reply frame received for 1\nI0311 11:01:08.877619 604 log.go:172] (0xc000162790) (0xc0003be000) Create stream\nI0311 11:01:08.877631 604 log.go:172] (0xc000162790) (0xc0003be000) Stream added, broadcasting: 3\nI0311 
11:01:08.878557 604 log.go:172] (0xc000162790) Reply frame received for 3\nI0311 11:01:08.878601 604 log.go:172] (0xc000162790) (0xc0003ce000) Create stream\nI0311 11:01:08.878612 604 log.go:172] (0xc000162790) (0xc0003ce000) Stream added, broadcasting: 5\nI0311 11:01:08.880006 604 log.go:172] (0xc000162790) Reply frame received for 5\nI0311 11:01:08.929114 604 log.go:172] (0xc000162790) Data frame received for 3\nI0311 11:01:08.929147 604 log.go:172] (0xc0003be000) (3) Data frame handling\nI0311 11:01:08.929160 604 log.go:172] (0xc0003be000) (3) Data frame sent\nI0311 11:01:08.929172 604 log.go:172] (0xc000162790) Data frame received for 3\nI0311 11:01:08.929182 604 log.go:172] (0xc0003be000) (3) Data frame handling\nI0311 11:01:08.929221 604 log.go:172] (0xc000162790) Data frame received for 5\nI0311 11:01:08.929253 604 log.go:172] (0xc0003ce000) (5) Data frame handling\nI0311 11:01:08.930833 604 log.go:172] (0xc000162790) Data frame received for 1\nI0311 11:01:08.930861 604 log.go:172] (0xc00062f400) (1) Data frame handling\nI0311 11:01:08.930883 604 log.go:172] (0xc00062f400) (1) Data frame sent\nI0311 11:01:08.930903 604 log.go:172] (0xc000162790) (0xc00062f400) Stream removed, broadcasting: 1\nI0311 11:01:08.930926 604 log.go:172] (0xc000162790) Go away received\nI0311 11:01:08.931117 604 log.go:172] (0xc000162790) (0xc00062f400) Stream removed, broadcasting: 1\nI0311 11:01:08.931137 604 log.go:172] (0xc000162790) (0xc0003be000) Stream removed, broadcasting: 3\nI0311 11:01:08.931148 604 log.go:172] (0xc000162790) (0xc0003ce000) Stream removed, broadcasting: 5\n" Mar 11 11:01:08.934: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 11:01:08.934: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 11:01:08.947: INFO: Found 1 stateful pods, waiting for 3 Mar 11 11:01:18.952: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 11 11:01:18.953: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 11 11:01:18.953: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 11 11:01:18.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rkzlq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 11:01:19.156: INFO: stderr: "I0311 11:01:19.093047 627 log.go:172] (0xc0007ea370) (0xc00071c640) Create stream\nI0311 11:01:19.093094 627 log.go:172] (0xc0007ea370) (0xc00071c640) Stream added, broadcasting: 1\nI0311 11:01:19.094816 627 log.go:172] (0xc0007ea370) Reply frame received for 1\nI0311 11:01:19.094863 627 log.go:172] (0xc0007ea370) (0xc00078ed20) Create stream\nI0311 11:01:19.094874 627 log.go:172] (0xc0007ea370) (0xc00078ed20) Stream added, broadcasting: 3\nI0311 11:01:19.095557 627 log.go:172] (0xc0007ea370) Reply frame received for 3\nI0311 11:01:19.095591 627 log.go:172] (0xc0007ea370) (0xc0006ba000) Create stream\nI0311 11:01:19.095604 627 log.go:172] (0xc0007ea370) (0xc0006ba000) Stream added, broadcasting: 5\nI0311 11:01:19.096432 627 log.go:172] (0xc0007ea370) Reply frame received for 5\nI0311 11:01:19.152563 627 log.go:172] (0xc0007ea370) Data frame received for 5\nI0311 11:01:19.152606 627 log.go:172] (0xc0007ea370) Data frame received for 3\nI0311 
11:01:19.152635 627 log.go:172] (0xc00078ed20) (3) Data frame handling\nI0311 11:01:19.152649 627 log.go:172] (0xc0006ba000) (5) Data frame handling\nI0311 11:01:19.152674 627 log.go:172] (0xc00078ed20) (3) Data frame sent\nI0311 11:01:19.152682 627 log.go:172] (0xc0007ea370) Data frame received for 3\nI0311 11:01:19.152693 627 log.go:172] (0xc00078ed20) (3) Data frame handling\nI0311 11:01:19.153498 627 log.go:172] (0xc0007ea370) Data frame received for 1\nI0311 11:01:19.153512 627 log.go:172] (0xc00071c640) (1) Data frame handling\nI0311 11:01:19.153524 627 log.go:172] (0xc00071c640) (1) Data frame sent\nI0311 11:01:19.153533 627 log.go:172] (0xc0007ea370) (0xc00071c640) Stream removed, broadcasting: 1\nI0311 11:01:19.153568 627 log.go:172] (0xc0007ea370) Go away received\nI0311 11:01:19.153704 627 log.go:172] (0xc0007ea370) (0xc00071c640) Stream removed, broadcasting: 1\nI0311 11:01:19.153717 627 log.go:172] (0xc0007ea370) (0xc00078ed20) Stream removed, broadcasting: 3\nI0311 11:01:19.153725 627 log.go:172] (0xc0007ea370) (0xc0006ba000) Stream removed, broadcasting: 5\n" Mar 11 11:01:19.156: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 11:01:19.156: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 11:01:19.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rkzlq ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 11:01:19.370: INFO: stderr: "I0311 11:01:19.259071 650 log.go:172] (0xc0007f42c0) (0xc0006e0640) Create stream\nI0311 11:01:19.259116 650 log.go:172] (0xc0007f42c0) (0xc0006e0640) Stream added, broadcasting: 1\nI0311 11:01:19.261148 650 log.go:172] (0xc0007f42c0) Reply frame received for 1\nI0311 11:01:19.261184 650 log.go:172] (0xc0007f42c0) (0xc0005d6e60) Create stream\nI0311 11:01:19.261192 650 log.go:172] (0xc0007f42c0) (0xc0005d6e60) Stream added, broadcasting: 3\nI0311 11:01:19.262054 650 log.go:172] (0xc0007f42c0) Reply frame received for 3\nI0311 11:01:19.262083 650 log.go:172] (0xc0007f42c0) (0xc0006e06e0) Create stream\nI0311 11:01:19.262090 650 log.go:172] (0xc0007f42c0) (0xc0006e06e0) Stream added, broadcasting: 5\nI0311 11:01:19.262815 650 log.go:172] (0xc0007f42c0) Reply frame received for 5\nI0311 11:01:19.367634 650 log.go:172] (0xc0007f42c0) Data frame received for 3\nI0311 11:01:19.367666 650 log.go:172] (0xc0005d6e60) (3) Data frame handling\nI0311 11:01:19.367686 650 log.go:172] (0xc0005d6e60) (3) Data frame sent\nI0311 11:01:19.367696 650 log.go:172] (0xc0007f42c0) Data frame received for 3\nI0311 11:01:19.367704 650 log.go:172] (0xc0005d6e60) (3) Data frame handling\nI0311 11:01:19.367723 650 log.go:172] (0xc0007f42c0) Data frame received for 5\nI0311 11:01:19.367736 650 log.go:172] (0xc0006e06e0) (5) Data frame handling\nI0311 11:01:19.368764 650 log.go:172] (0xc0007f42c0) Data frame received for 1\nI0311 11:01:19.368776 650 log.go:172] (0xc0006e0640) (1) Data frame handling\nI0311 11:01:19.368781 650 log.go:172] (0xc0006e0640) (1) Data frame sent\nI0311 11:01:19.368788 650 log.go:172] (0xc0007f42c0) (0xc0006e0640) Stream removed, broadcasting: 1\nI0311 11:01:19.368832 650 log.go:172] (0xc0007f42c0) Go away received\nI0311 11:01:19.368923 650 log.go:172] (0xc0007f42c0) (0xc0006e0640) Stream removed, broadcasting: 1\nI0311 11:01:19.368938 650 log.go:172] (0xc0007f42c0) (0xc0005d6e60) Stream removed, broadcasting: 3\nI0311 11:01:19.368948 650 
log.go:172] (0xc0007f42c0) (0xc0006e06e0) Stream removed, broadcasting: 5\n" Mar 11 11:01:19.371: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 11:01:19.371: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 11:01:19.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rkzlq ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 11:01:19.521: INFO: stderr: "I0311 11:01:19.446893 671 log.go:172] (0xc0008442c0) (0xc00074c640) Create stream\nI0311 11:01:19.446923 671 log.go:172] (0xc0008442c0) (0xc00074c640) Stream added, broadcasting: 1\nI0311 11:01:19.448131 671 log.go:172] (0xc0008442c0) Reply frame received for 1\nI0311 11:01:19.448156 671 log.go:172] (0xc0008442c0) (0xc000604be0) Create stream\nI0311 11:01:19.448167 671 log.go:172] (0xc0008442c0) (0xc000604be0) Stream added, broadcasting: 3\nI0311 11:01:19.448679 671 log.go:172] (0xc0008442c0) Reply frame received for 3\nI0311 11:01:19.448708 671 log.go:172] (0xc0008442c0) (0xc00039a000) Create stream\nI0311 11:01:19.448718 671 log.go:172] (0xc0008442c0) (0xc00039a000) Stream added, broadcasting: 5\nI0311 11:01:19.449238 671 log.go:172] (0xc0008442c0) Reply frame received for 5\nI0311 11:01:19.517974 671 log.go:172] (0xc0008442c0) Data frame received for 5\nI0311 11:01:19.517999 671 log.go:172] (0xc00039a000) (5) Data frame handling\nI0311 11:01:19.518044 671 log.go:172] (0xc0008442c0) Data frame received for 3\nI0311 11:01:19.518070 671 log.go:172] (0xc000604be0) (3) Data frame handling\nI0311 11:01:19.518084 671 log.go:172] (0xc000604be0) (3) Data frame sent\nI0311 11:01:19.518093 671 log.go:172] (0xc0008442c0) Data frame received for 3\nI0311 11:01:19.518102 671 log.go:172] (0xc000604be0) (3) Data frame handling\nI0311 11:01:19.519106 671 log.go:172] (0xc0008442c0) Data frame received for 1\nI0311 11:01:19.519119 671 log.go:172] (0xc00074c640) (1) Data frame handling\nI0311 11:01:19.519125 671 log.go:172] (0xc00074c640) (1) Data frame sent\nI0311 11:01:19.519133 671 log.go:172] (0xc0008442c0) (0xc00074c640) Stream removed, broadcasting: 1\nI0311 11:01:19.519146 671 log.go:172] (0xc0008442c0) Go away received\nI0311 11:01:19.519293 671 log.go:172] (0xc0008442c0) (0xc00074c640) Stream removed, broadcasting: 1\nI0311 11:01:19.519309 671 log.go:172] (0xc0008442c0) (0xc000604be0) Stream removed, broadcasting: 3\nI0311 11:01:19.519315 671 log.go:172] (0xc0008442c0) (0xc00039a000) Stream removed, broadcasting: 5\n" Mar 11 11:01:19.522: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 11:01:19.522: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 11:01:19.522: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 11:01:19.524: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 11 11:01:29.531: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 11 11:01:29.531: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 11 11:01:29.531: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 11 11:01:29.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999761s Mar 11 11:01:30.572: INFO: Verifying statefulset ss doesn't scale past 3 
for another 8.968486486s Mar 11 11:01:31.577: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.963932365s Mar 11 11:01:32.582: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.959156835s Mar 11 11:01:33.587: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.954260141s Mar 11 11:01:34.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.949650624s Mar 11 11:01:35.596: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.947219582s Mar 11 11:01:36.600: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.940411186s Mar 11 11:01:37.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.93646245s Mar 11 11:01:38.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 932.28674ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-rkzlq Mar 11 11:01:39.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rkzlq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 11:01:39.797: INFO: stderr: "I0311 11:01:39.733394 694 log.go:172] (0xc00013c840) (0xc0005b1400) Create stream\nI0311 11:01:39.733439 694 log.go:172] (0xc00013c840) (0xc0005b1400) Stream added, broadcasting: 1\nI0311 11:01:39.735194 694 log.go:172] (0xc00013c840) Reply frame received for 1\nI0311 11:01:39.735222 694 log.go:172] (0xc00013c840) (0xc0007b6000) Create stream\nI0311 11:01:39.735230 694 log.go:172] (0xc00013c840) (0xc0007b6000) Stream added, broadcasting: 3\nI0311 11:01:39.736173 694 log.go:172] (0xc00013c840) Reply frame received for 3\nI0311 11:01:39.736227 694 log.go:172] (0xc00013c840) (0xc0005ca000) Create stream\nI0311 11:01:39.736244 694 log.go:172] (0xc00013c840) (0xc0005ca000) Stream added, broadcasting: 5\nI0311 11:01:39.737259 694 log.go:172] (0xc00013c840) Reply frame received for 5\nI0311 11:01:39.792717 694 log.go:172] (0xc00013c840) Data frame received for 5\nI0311 11:01:39.792748 694 log.go:172] (0xc0005ca000) (5) Data frame handling\nI0311 11:01:39.792770 694 log.go:172] (0xc00013c840) Data frame received for 3\nI0311 11:01:39.792779 694 log.go:172] (0xc0007b6000) (3) Data frame handling\nI0311 11:01:39.792790 694 log.go:172] (0xc0007b6000) (3) Data frame sent\nI0311 11:01:39.792803 694 log.go:172] (0xc00013c840) Data frame received for 3\nI0311 11:01:39.792811 694 log.go:172] (0xc0007b6000) (3) Data frame handling\nI0311 11:01:39.793775 694 log.go:172] (0xc00013c840) Data frame received for 1\nI0311 11:01:39.793802 694 log.go:172] (0xc0005b1400) (1) Data frame handling\nI0311 11:01:39.793815 694 log.go:172] (0xc0005b1400) (1) Data frame sent\nI0311 11:01:39.793828 694 log.go:172] (0xc00013c840) (0xc0005b1400) Stream removed, broadcasting: 1\nI0311 11:01:39.793851 694 log.go:172] (0xc00013c840) Go away received\nI0311 11:01:39.794055 694 log.go:172] (0xc00013c840) (0xc0005b1400) Stream removed, broadcasting: 1\nI0311 11:01:39.794079 694 log.go:172] (0xc00013c840) (0xc0007b6000) Stream removed, broadcasting: 3\nI0311 11:01:39.794090 694 log.go:172] (0xc00013c840) (0xc0005ca000) Stream removed, broadcasting: 5\n" Mar 11 11:01:39.797: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 11:01:39.797: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 11:01:39.797: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rkzlq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 11:01:39.997: INFO: stderr: "I0311 11:01:39.927125 717 log.go:172] (0xc0001388f0) (0xc00077c640) Create stream\nI0311 11:01:39.927162 717 log.go:172] (0xc0001388f0) (0xc00077c640) Stream added, broadcasting: 1\nI0311 11:01:39.929789 717 log.go:172] (0xc0001388f0) Reply frame received for 1\nI0311 11:01:39.929816 717 log.go:172] (0xc0001388f0) (0xc000688dc0) Create stream\nI0311 11:01:39.929822 717 log.go:172] (0xc0001388f0) (0xc000688dc0) Stream added, broadcasting: 3\nI0311 11:01:39.930846 717 log.go:172] (0xc0001388f0) Reply frame received for 3\nI0311 11:01:39.930894 717 log.go:172] (0xc0001388f0) (0xc00077c6e0) Create stream\nI0311 11:01:39.930906 717 log.go:172] (0xc0001388f0) (0xc00077c6e0) Stream added, broadcasting: 5\nI0311 11:01:39.931731 717 log.go:172] (0xc0001388f0) Reply frame received for 5\nI0311 11:01:39.993174 717 log.go:172] (0xc0001388f0) Data frame received for 5\nI0311 11:01:39.993226 717 log.go:172] (0xc0001388f0) Data frame received for 3\nI0311 11:01:39.993271 717 log.go:172] (0xc000688dc0) (3) Data frame handling\nI0311 11:01:39.993289 717 log.go:172] (0xc000688dc0) (3) Data frame sent\nI0311 11:01:39.993300 717 log.go:172] (0xc0001388f0) Data frame received for 3\nI0311 11:01:39.993307 717 log.go:172] (0xc000688dc0) (3) Data frame handling\nI0311 11:01:39.993348 717 log.go:172] (0xc00077c6e0) (5) Data frame handling\nI0311 11:01:39.994263 717 log.go:172] (0xc0001388f0) Data frame received for 1\nI0311 11:01:39.994288 717 log.go:172] (0xc00077c640) (1) Data frame handling\nI0311 11:01:39.994302 717 log.go:172] (0xc00077c640) (1) Data frame sent\nI0311 11:01:39.994319 717 log.go:172] (0xc0001388f0) (0xc00077c640) Stream removed, broadcasting: 1\nI0311 11:01:39.994350 717 log.go:172] (0xc0001388f0) Go away received\nI0311 11:01:39.994546 717 log.go:172] (0xc0001388f0) (0xc00077c640) Stream removed, broadcasting: 1\nI0311 11:01:39.994569 717 log.go:172] (0xc0001388f0) (0xc000688dc0) Stream removed, broadcasting: 3\nI0311 11:01:39.994585 717 log.go:172] (0xc0001388f0) (0xc00077c6e0) Stream removed, broadcasting: 5\n" Mar 11 11:01:39.998: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 11:01:39.998: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 11:01:39.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rkzlq ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 11:01:40.168: INFO: stderr: "I0311 11:01:40.114797 740 log.go:172] (0xc0008b62c0) (0xc000730640) Create stream\nI0311 11:01:40.114836 740 log.go:172] (0xc0008b62c0) (0xc000730640) Stream added, broadcasting: 1\nI0311 11:01:40.116451 740 log.go:172] (0xc0008b62c0) Reply frame received for 1\nI0311 11:01:40.116477 740 log.go:172] (0xc0008b62c0) (0xc0005e6be0) Create stream\nI0311 11:01:40.116484 740 log.go:172] (0xc0008b62c0) (0xc0005e6be0) Stream added, broadcasting: 3\nI0311 11:01:40.117047 740 log.go:172] (0xc0008b62c0) Reply frame received for 3\nI0311 11:01:40.117066 740 log.go:172] (0xc0008b62c0) (0xc0007306e0) Create stream\nI0311 11:01:40.117072 740 log.go:172] (0xc0008b62c0) (0xc0007306e0) Stream added, broadcasting: 5\nI0311 11:01:40.117696 740 log.go:172] (0xc0008b62c0) Reply frame received for 5\nI0311 11:01:40.164271 740 log.go:172] 
(0xc0008b62c0) Data frame received for 5\nI0311 11:01:40.164297 740 log.go:172] (0xc0007306e0) (5) Data frame handling\nI0311 11:01:40.164322 740 log.go:172] (0xc0008b62c0) Data frame received for 3\nI0311 11:01:40.164343 740 log.go:172] (0xc0005e6be0) (3) Data frame handling\nI0311 11:01:40.164358 740 log.go:172] (0xc0005e6be0) (3) Data frame sent\nI0311 11:01:40.164367 740 log.go:172] (0xc0008b62c0) Data frame received for 3\nI0311 11:01:40.164375 740 log.go:172] (0xc0005e6be0) (3) Data frame handling\nI0311 11:01:40.165715 740 log.go:172] (0xc0008b62c0) Data frame received for 1\nI0311 11:01:40.165732 740 log.go:172] (0xc000730640) (1) Data frame handling\nI0311 11:01:40.165745 740 log.go:172] (0xc000730640) (1) Data frame sent\nI0311 11:01:40.165760 740 log.go:172] (0xc0008b62c0) (0xc000730640) Stream removed, broadcasting: 1\nI0311 11:01:40.165775 740 log.go:172] (0xc0008b62c0) Go away received\nI0311 11:01:40.165919 740 log.go:172] (0xc0008b62c0) (0xc000730640) Stream removed, broadcasting: 1\nI0311 11:01:40.165937 740 log.go:172] (0xc0008b62c0) (0xc0005e6be0) Stream removed, broadcasting: 3\nI0311 11:01:40.165949 740 log.go:172] (0xc0008b62c0) (0xc0007306e0) Stream removed, broadcasting: 5\n" Mar 11 11:01:40.168: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 11:01:40.168: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 11:01:40.168: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 11 11:02:10.183: INFO: Deleting all statefulset in ns e2e-tests-statefulset-rkzlq Mar 11 11:02:10.186: INFO: Scaling statefulset ss to 0 Mar 11 11:02:10.194: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 11:02:10.197: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:02:10.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-rkzlq" for this suite. 
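The scaling test above works by moving nginx's index.html out of the web root over kubectl exec, which makes the pod's readiness probe fail; with the default OrderedReady pod management the StatefulSet controller then refuses to scale up or down past the unhealthy pod, which is the repeated "doesn't scale past N" polling in the log. Moving index.html back restores readiness and scaling resumes. A rough sketch of such a stateful set, with the selector labels and service name taken from the log and the image and probe details treated as assumptions:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                     # headless service the test creates in the namespace
  replicas: 1
  podManagementPolicy: OrderedReady     # default; ordered, health-gated scaling is what the test verifies
  selector:
    matchLabels:
      baz: blah
      foo: bar
  template:
    metadata:
      labels:
        baz: blah
        foo: bar
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine        # illustrative image
        ports:
        - containerPort: 80
        readinessProbe:                 # fails once index.html is moved aside, halting further scaling
          httpGet:
            path: /index.html
            port: 80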
Mar 11 11:02:16.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:02:16.332: INFO: namespace: e2e-tests-statefulset-rkzlq, resource: bindings, ignored listing per whitelist Mar 11 11:02:16.356: INFO: namespace e2e-tests-statefulset-rkzlq deletion completed in 6.080027375s • [SLOW TEST:98.035 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:02:16.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-k4kjn Mar 11 11:02:18.488: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-k4kjn STEP: checking the pod's current state and verifying that restartCount is present Mar 11 11:02:18.489: INFO: Initial restart count of pod liveness-http is 0 Mar 11 11:02:32.516: INFO: Restart count of pod e2e-tests-container-probe-k4kjn/liveness-http is now 1 (14.026652823s elapsed) Mar 11 11:02:52.617: INFO: Restart count of pod e2e-tests-container-probe-k4kjn/liveness-http is now 2 (34.127555253s elapsed) Mar 11 11:03:10.667: INFO: Restart count of pod e2e-tests-container-probe-k4kjn/liveness-http is now 3 (52.178191574s elapsed) Mar 11 11:03:30.711: INFO: Restart count of pod e2e-tests-container-probe-k4kjn/liveness-http is now 4 (1m12.22220479s elapsed) Mar 11 11:04:42.937: INFO: Restart count of pod e2e-tests-container-probe-k4kjn/liveness-http is now 5 (2m24.447982144s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:04:42.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-k4kjn" for this suite. 
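The restart-count test above runs a pod whose HTTP liveness probe is guaranteed to start failing, so the kubelet keeps restarting the container and restartCount can only increase; the lengthening intervals between restarts in the log come from the kubelet's exponential crash backoff. A minimal sketch of a pod with the same property, assuming an illustrative nginx image probed on a /healthz path it does not serve; the real test image serves a health endpoint that deliberately begins failing:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-sketch            # hypothetical name
spec:
  containers:
  - name: liveness
    image: nginx:1.14-alpine            # illustrative image
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz                  # nginx does not serve this path, so the probe always fails
        port: 80
      initialDelaySeconds: 3            # illustrative timings
      periodSeconds: 3
      failureThreshold: 3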
Mar 11 11:04:49.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:04:49.026: INFO: namespace: e2e-tests-container-probe-k4kjn, resource: bindings, ignored listing per whitelist Mar 11 11:04:49.087: INFO: namespace e2e-tests-container-probe-k4kjn deletion completed in 6.085999238s • [SLOW TEST:152.731 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:04:49.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:04:49.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ffeec74-6388-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-skvgh" to be "success or failure" Mar 11 11:04:49.196: INFO: Pod "downwardapi-volume-1ffeec74-6388-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.621986ms Mar 11 11:04:51.201: INFO: Pod "downwardapi-volume-1ffeec74-6388-11ea-bacb-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 2.023836575s Mar 11 11:04:53.205: INFO: Pod "downwardapi-volume-1ffeec74-6388-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027965485s STEP: Saw pod success Mar 11 11:04:53.205: INFO: Pod "downwardapi-volume-1ffeec74-6388-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:04:53.208: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1ffeec74-6388-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:04:53.249: INFO: Waiting for pod downwardapi-volume-1ffeec74-6388-11ea-bacb-0242ac11000a to disappear Mar 11 11:04:53.255: INFO: Pod downwardapi-volume-1ffeec74-6388-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:04:53.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-skvgh" for this suite. 
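The cpu-request case above exposes the container's own resources.requests.cpu through a projected downwardAPI volume via resourceFieldRef. A minimal sketch, with hypothetical names and illustrative request and divisor values:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-sketch  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                       # illustrative request
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m               # report the request in millicores

With a 250m request and a 1m divisor, the mounted file contains the string 250.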
Mar 11 11:04:59.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:04:59.312: INFO: namespace: e2e-tests-projected-skvgh, resource: bindings, ignored listing per whitelist Mar 11 11:04:59.327: INFO: namespace e2e-tests-projected-skvgh deletion completed in 6.068765139s • [SLOW TEST:10.240 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:04:59.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Mar 11 11:04:59.387: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Mar 11 11:04:59.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:01.139: INFO: stderr: "" Mar 11 11:05:01.139: INFO: stdout: "service/redis-slave created\n" Mar 11 11:05:01.139: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Mar 11 11:05:01.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:01.432: INFO: stderr: "" Mar 11 11:05:01.432: INFO: stdout: "service/redis-master created\n" Mar 11 11:05:01.432: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 11 11:05:01.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:01.710: INFO: stderr: "" Mar 11 11:05:01.710: INFO: stdout: "service/frontend created\n" Mar 11 11:05:01.710: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Mar 11 11:05:01.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:01.939: INFO: stderr: "" Mar 11 11:05:01.939: INFO: stdout: "deployment.extensions/frontend created\n" Mar 11 11:05:01.939: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 11 11:05:01.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:02.225: INFO: stderr: "" Mar 11 11:05:02.225: INFO: stdout: "deployment.extensions/redis-master created\n" Mar 11 11:05:02.226: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Mar 11 11:05:02.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:02.545: INFO: stderr: "" Mar 11 11:05:02.545: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Mar 11 11:05:02.545: INFO: Waiting for all frontend pods to be Running. Mar 11 11:05:07.596: INFO: Waiting for frontend to serve content. Mar 11 11:05:07.612: INFO: Trying to add a new entry to the guestbook. Mar 11 11:05:07.626: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 11 11:05:07.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:07.819: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 11 11:05:07.819: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 11 11:05:07.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:07.943: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 11:05:07.943: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 11 11:05:07.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:08.069: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 11:05:08.069: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 11 11:05:08.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:08.137: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 11:05:08.137: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 11 11:05:08.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:08.233: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 11:05:08.233: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 11 11:05:08.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rn5g2' Mar 11 11:05:08.328: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 11:05:08.328: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:05:08.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rn5g2" for this suite. 
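The guestbook manifests are emitted by the test on single lines above; reconstructed with conventional indentation (the indentation is inferred, the content is from the log), the frontend Service reads:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

The deployments in the log follow the same pattern: extensions/v1beta1 Deployments for the frontend and the redis master and slave tiers, each requesting 100m CPU and 100Mi memory.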
Mar 11 11:05:50.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:05:50.375: INFO: namespace: e2e-tests-kubectl-rn5g2, resource: bindings, ignored listing per whitelist Mar 11 11:05:50.401: INFO: namespace e2e-tests-kubectl-rn5g2 deletion completed in 42.067786146s • [SLOW TEST:51.074 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:05:50.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Mar 11 11:05:50.509: INFO: Waiting up to 5m0s for pod "var-expansion-448dafdf-6388-11ea-bacb-0242ac11000a" in namespace "e2e-tests-var-expansion-kqsfn" to be "success or failure" Mar 11 11:05:50.513: INFO: Pod "var-expansion-448dafdf-6388-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.945826ms Mar 11 11:05:52.517: INFO: Pod "var-expansion-448dafdf-6388-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007783684s Mar 11 11:05:54.520: INFO: Pod "var-expansion-448dafdf-6388-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0111374s STEP: Saw pod success Mar 11 11:05:54.520: INFO: Pod "var-expansion-448dafdf-6388-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:05:54.523: INFO: Trying to get logs from node hunter-worker pod var-expansion-448dafdf-6388-11ea-bacb-0242ac11000a container dapi-container: STEP: delete the pod Mar 11 11:05:54.574: INFO: Waiting for pod var-expansion-448dafdf-6388-11ea-bacb-0242ac11000a to disappear Mar 11 11:05:54.577: INFO: Pod var-expansion-448dafdf-6388-11ea-bacb-0242ac11000a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:05:54.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-kqsfn" for this suite. 
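The env-composition test above relies on $(VAR) expansion in a container's env list: a later entry may reference earlier ones, and the kubelet substitutes their values before the container starts. A minimal sketch, with hypothetical names and values:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-sketch            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                      # illustrative image
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: "foo-value"
    - name: BAR
      value: "bar-value"
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"           # expands to foo-value;;bar-value using the earlier entries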
Mar 11 11:06:00.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:06:00.601: INFO: namespace: e2e-tests-var-expansion-kqsfn, resource: bindings, ignored listing per whitelist Mar 11 11:06:00.648: INFO: namespace e2e-tests-var-expansion-kqsfn deletion completed in 6.066533062s • [SLOW TEST:10.246 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:06:00.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 11:06:00.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-j5t2l' Mar 11 11:06:00.771: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 11:06:00.771: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Mar 11 11:06:00.775: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 11 11:06:00.811: INFO: scanned /root for discovery docs: Mar 11 11:06:00.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-j5t2l' Mar 11 11:06:16.655: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 11 11:06:16.655: INFO: stdout: "Created e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4\nScaling up e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Mar 11 11:06:16.655: INFO: stdout: "Created e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4\nScaling up e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 11 11:06:16.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-j5t2l' Mar 11 11:06:16.761: INFO: stderr: "" Mar 11 11:06:16.761: INFO: stdout: "e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4-hvksc e2e-test-nginx-rc-tk7g8 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Mar 11 11:06:21.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-j5t2l' Mar 11 11:06:21.887: INFO: stderr: "" Mar 11 11:06:21.888: INFO: stdout: "e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4-hvksc " Mar 11 11:06:21.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4-hvksc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j5t2l' Mar 11 11:06:21.984: INFO: stderr: "" Mar 11 11:06:21.984: INFO: stdout: "true" Mar 11 11:06:21.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4-hvksc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j5t2l' Mar 11 11:06:22.069: INFO: stderr: "" Mar 11 11:06:22.069: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 11 11:06:22.069: INFO: e2e-test-nginx-rc-e3abb1e65a2cfc1c978ea3e7b94152f4-hvksc is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Mar 11 11:06:22.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-j5t2l' Mar 11 11:06:22.154: INFO: stderr: "" Mar 11 11:06:22.154: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:06:22.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j5t2l" for this suite. 
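The rolling-update spec above drives two kubectl invocations that are already deprecated in this release (v1.13): creating a bare ReplicationController with --generator=run/v1, then rolling it to the same image. Stripped of the suite's kubeconfig and generated namespace flags, the commands it runs are roughly:

# create the RC the way the test does (deprecated generator)
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1

# roll it to the same image; "rolling-update" is deprecated in favour of Deployments + "kubectl rollout"
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent

# clean up, as the test's AfterEach does
kubectl delete rc e2e-test-nginx-rc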
Mar 11 11:06:44.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:06:44.186: INFO: namespace: e2e-tests-kubectl-j5t2l, resource: bindings, ignored listing per whitelist Mar 11 11:06:44.257: INFO: namespace e2e-tests-kubectl-j5t2l deletion completed in 22.100061145s • [SLOW TEST:43.609 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:06:44.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Mar 11 11:06:44.365: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-884dq" to be "success or failure" Mar 11 11:06:44.395: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 29.948972ms Mar 11 11:06:46.398: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033097381s Mar 11 11:06:48.404: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038949614s STEP: Saw pod success Mar 11 11:06:48.404: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 11 11:06:48.406: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 11 11:06:48.448: INFO: Waiting for pod pod-host-path-test to disappear Mar 11 11:06:48.455: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:06:48.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-884dq" for this suite. 
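The HostPath spec above mounts a host directory and asserts on the mode of the mount point. A standalone pod exercising the same path might look like this; the directory path, image and command are illustrative, not the suite's.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # print the mode of the mounted directory
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate       # create the host directory if it does not already exist
EOF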
Mar 11 11:06:54.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:06:54.507: INFO: namespace: e2e-tests-hostpath-884dq, resource: bindings, ignored listing per whitelist Mar 11 11:06:54.541: INFO: namespace e2e-tests-hostpath-884dq deletion completed in 6.082409155s • [SLOW TEST:10.284 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:06:54.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-zpdxg STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zpdxg to expose endpoints map[] Mar 11 11:06:54.676: INFO: Get endpoints failed (5.326886ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 11 11:06:55.679: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zpdxg exposes endpoints map[] (1.008540368s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-zpdxg STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zpdxg to expose endpoints map[pod1:[80]] Mar 11 11:06:58.711: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zpdxg exposes endpoints map[pod1:[80]] (3.025855493s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-zpdxg STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zpdxg to expose endpoints map[pod2:[80] pod1:[80]] Mar 11 11:07:00.799: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zpdxg exposes endpoints map[pod1:[80] pod2:[80]] (2.084900498s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-zpdxg STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zpdxg to expose endpoints map[pod2:[80]] Mar 11 11:07:01.846: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zpdxg exposes endpoints map[pod2:[80]] (1.043286948s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-zpdxg STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zpdxg to expose endpoints map[] Mar 11 11:07:02.886: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zpdxg exposes endpoints map[] (1.03667056s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:07:02.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-zpdxg" for this suite. Mar 11 11:07:08.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:07:08.995: INFO: namespace: e2e-tests-services-zpdxg, resource: bindings, ignored listing per whitelist Mar 11 11:07:09.007: INFO: namespace e2e-tests-services-zpdxg deletion completed in 6.09587342s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:14.466 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:07:09.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 11 11:07:11.098: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-73638ebd-6388-11ea-bacb-0242ac11000a,GenerateName:,Namespace:e2e-tests-events-nxqxk,SelfLink:/api/v1/namespaces/e2e-tests-events-nxqxk/pods/send-events-73638ebd-6388-11ea-bacb-0242ac11000a,UID:7363e932-6388-11ea-9978-0242ac11000d,ResourceVersion:501571,Generation:0,CreationTimestamp:2020-03-11 11:07:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 82802880,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w8hxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8hxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-w8hxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001923a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001923aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:07:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:07:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:07:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:07:09 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.12,StartTime:2020-03-11 11:07:09 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-11 11:07:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://99555db609b4744ea5070197551e123c64c22d5f3d764cce2357d314ee2b3a09}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 11 11:07:13.103: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 11 11:07:15.107: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:07:15.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-nxqxk" for this suite. 
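The Events spec above only asserts that a scheduler event and a kubelet event show up for the pod. Outside the suite, roughly the same check is a field-selector query against the events API; the pod name and namespace below are illustrative.

# events emitted for one pod, e.g. Scheduled (default-scheduler) and Pulled/Created/Started (kubelet)
kubectl get events -n default \
  --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-demo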
Mar 11 11:07:53.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:07:53.210: INFO: namespace: e2e-tests-events-nxqxk, resource: bindings, ignored listing per whitelist Mar 11 11:07:53.237: INFO: namespace e2e-tests-events-nxqxk deletion completed in 38.102070441s • [SLOW TEST:44.230 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:07:53.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 11 11:07:53.294: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:07:56.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-cgf8w" for this suite. 
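The InitContainer spec above verifies that, on a RestartNever pod, each init container runs to completion, in order, before the app container starts. A minimal reproduction (names, images and commands illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                   # illustrative name
spec:
  restartPolicy: Never
  initContainers:                   # run sequentially; each must exit 0 before the next starts
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: run-1
    image: busybox
    command: ["sh", "-c", "echo app started"]
EOF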
Mar 11 11:08:02.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:08:02.868: INFO: namespace: e2e-tests-init-container-cgf8w, resource: bindings, ignored listing per whitelist Mar 11 11:08:02.902: INFO: namespace e2e-tests-init-container-cgf8w deletion completed in 6.067794288s • [SLOW TEST:9.664 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:08:02.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:08:05.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-wclfd" for this suite. 
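The Kubelet spec above schedules a busybox container with a read-only root filesystem and checks that writes fail. The field that enforces this is securityContext.readOnlyRootFilesystem; the pod name and command below are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /should-fail || echo write to root filesystem refused"]
    securityContext:
      readOnlyRootFilesystem: true  # the container's root filesystem is mounted read-only
EOF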
Mar 11 11:08:43.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:08:43.123: INFO: namespace: e2e-tests-kubelet-test-wclfd, resource: bindings, ignored listing per whitelist Mar 11 11:08:43.133: INFO: namespace e2e-tests-kubelet-test-wclfd deletion completed in 38.123769402s • [SLOW TEST:40.231 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:08:43.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:08:43.201: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab79cf64-6388-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-7bjpc" to be "success or failure" Mar 11 11:08:43.210: INFO: Pod "downwardapi-volume-ab79cf64-6388-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.453983ms Mar 11 11:08:45.214: INFO: Pod "downwardapi-volume-ab79cf64-6388-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013371664s STEP: Saw pod success Mar 11 11:08:45.214: INFO: Pod "downwardapi-volume-ab79cf64-6388-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:08:45.217: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ab79cf64-6388-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:08:45.261: INFO: Waiting for pod downwardapi-volume-ab79cf64-6388-11ea-bacb-0242ac11000a to disappear Mar 11 11:08:45.265: INFO: Pod downwardapi-volume-ab79cf64-6388-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:08:45.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7bjpc" for this suite. 
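The Downward API volume spec above projects the container's memory limit into a file and reads it back. The projection is a downwardAPI volume item with a resourceFieldRef; the names, image and 64Mi limit here are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container   # required when resourceFieldRef is used in a volume
          resource: limits.memory           # projected in bytes unless a divisor is set
EOF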
Mar 11 11:08:51.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:08:51.350: INFO: namespace: e2e-tests-downward-api-7bjpc, resource: bindings, ignored listing per whitelist Mar 11 11:08:51.356: INFO: namespace e2e-tests-downward-api-7bjpc deletion completed in 6.088206849s • [SLOW TEST:8.223 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:08:51.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-9xlzz/configmap-test-b062762d-6388-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:08:51.440: INFO: Waiting up to 5m0s for pod "pod-configmaps-b0654330-6388-11ea-bacb-0242ac11000a" in namespace "e2e-tests-configmap-9xlzz" to be "success or failure" Mar 11 11:08:51.445: INFO: Pod "pod-configmaps-b0654330-6388-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.858496ms Mar 11 11:08:53.449: INFO: Pod "pod-configmaps-b0654330-6388-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008390008s STEP: Saw pod success Mar 11 11:08:53.449: INFO: Pod "pod-configmaps-b0654330-6388-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:08:53.451: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-b0654330-6388-11ea-bacb-0242ac11000a container env-test: STEP: delete the pod Mar 11 11:08:53.471: INFO: Waiting for pod pod-configmaps-b0654330-6388-11ea-bacb-0242ac11000a to disappear Mar 11 11:08:53.486: INFO: Pod pod-configmaps-b0654330-6388-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:08:53.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9xlzz" for this suite. 
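The sig-node ConfigMap spec above injects a ConfigMap key into the container environment rather than into a volume. A standalone equivalent (ConfigMap name, key, image and pod name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                 # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:            # pull a single key out of the ConfigMap
          name: demo-config
          key: data-1
EOF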
Mar 11 11:08:59.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:08:59.550: INFO: namespace: e2e-tests-configmap-9xlzz, resource: bindings, ignored listing per whitelist Mar 11 11:08:59.568: INFO: namespace e2e-tests-configmap-9xlzz deletion completed in 6.078481901s • [SLOW TEST:8.211 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:08:59.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Mar 11 11:09:00.163: INFO: Waiting up to 5m0s for pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-7rd84" in namespace "e2e-tests-svcaccounts-r6dn9" to be "success or failure" Mar 11 11:09:00.183: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-7rd84": Phase="Pending", Reason="", readiness=false. Elapsed: 20.325735ms Mar 11 11:09:02.185: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-7rd84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022496526s Mar 11 11:09:04.189: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-7rd84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026425616s STEP: Saw pod success Mar 11 11:09:04.189: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-7rd84" satisfied condition "success or failure" Mar 11 11:09:04.193: INFO: Trying to get logs from node hunter-worker pod pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-7rd84 container token-test: STEP: delete the pod Mar 11 11:09:04.228: INFO: Waiting for pod pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-7rd84 to disappear Mar 11 11:09:04.234: INFO: Pod pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-7rd84 no longer exists STEP: Creating a pod to test consume service account root CA Mar 11 11:09:04.237: INFO: Waiting up to 5m0s for pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-cw78p" in namespace "e2e-tests-svcaccounts-r6dn9" to be "success or failure" Mar 11 11:09:04.251: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-cw78p": Phase="Pending", Reason="", readiness=false. Elapsed: 13.615819ms Mar 11 11:09:06.254: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-cw78p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017092451s Mar 11 11:09:08.259: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-cw78p": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022279325s STEP: Saw pod success Mar 11 11:09:08.259: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-cw78p" satisfied condition "success or failure" Mar 11 11:09:08.262: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-cw78p container root-ca-test: STEP: delete the pod Mar 11 11:09:08.311: INFO: Waiting for pod pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-cw78p to disappear Mar 11 11:09:08.329: INFO: Pod pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-cw78p no longer exists STEP: Creating a pod to test consume service account namespace Mar 11 11:09:08.332: INFO: Waiting up to 5m0s for pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-gtcn6" in namespace "e2e-tests-svcaccounts-r6dn9" to be "success or failure" Mar 11 11:09:08.338: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-gtcn6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.016126ms Mar 11 11:09:10.340: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-gtcn6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007961518s Mar 11 11:09:12.344: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-gtcn6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011499652s STEP: Saw pod success Mar 11 11:09:12.344: INFO: Pod "pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-gtcn6" satisfied condition "success or failure" Mar 11 11:09:12.346: INFO: Trying to get logs from node hunter-worker pod pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-gtcn6 container namespace-test: STEP: delete the pod Mar 11 11:09:12.364: INFO: Waiting for pod pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-gtcn6 to disappear Mar 11 11:09:12.396: INFO: Pod pod-service-account-b598583a-6388-11ea-bacb-0242ac11000a-gtcn6 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:09:12.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-r6dn9" for this suite. 
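The ServiceAccounts spec above runs three pods that read, respectively, the token, the cluster root CA and the namespace file mounted from the default service account. One pod can show all three; the mount path is the well-known fixed location, the rest of the manifest is illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: svc-account-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: busybox
    command:
    - sh
    - -c
    - |
      ls /var/run/secrets/kubernetes.io/serviceaccount      # token, ca.crt, namespace
      cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
EOF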
Mar 11 11:09:18.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:09:18.495: INFO: namespace: e2e-tests-svcaccounts-r6dn9, resource: bindings, ignored listing per whitelist Mar 11 11:09:18.525: INFO: namespace e2e-tests-svcaccounts-r6dn9 deletion completed in 6.125869576s • [SLOW TEST:18.957 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:09:18.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 11 11:09:18.613: INFO: Waiting up to 5m0s for pod "downward-api-c0979270-6388-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-g98pd" to be "success or failure" Mar 11 11:09:18.627: INFO: Pod "downward-api-c0979270-6388-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.202351ms Mar 11 11:09:20.631: INFO: Pod "downward-api-c0979270-6388-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018629092s Mar 11 11:09:22.658: INFO: Pod "downward-api-c0979270-6388-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045050494s STEP: Saw pod success Mar 11 11:09:22.658: INFO: Pod "downward-api-c0979270-6388-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:09:22.661: INFO: Trying to get logs from node hunter-worker2 pod downward-api-c0979270-6388-11ea-bacb-0242ac11000a container dapi-container: STEP: delete the pod Mar 11 11:09:22.679: INFO: Waiting for pod downward-api-c0979270-6388-11ea-bacb-0242ac11000a to disappear Mar 11 11:09:22.684: INFO: Pod downward-api-c0979270-6388-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:09:22.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g98pd" for this suite. 
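The Downward API spec above exposes the node's IP to the container through an env var; the relevant piece is a fieldRef on status.hostIP. Pod name and image below are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP  # the IP of the node the pod was scheduled to
EOF

On this cluster that resolves to a node address such as the 172.17.0.11 seen in the pod dumps above.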
Mar 11 11:09:28.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:09:28.780: INFO: namespace: e2e-tests-downward-api-g98pd, resource: bindings, ignored listing per whitelist Mar 11 11:09:28.783: INFO: namespace e2e-tests-downward-api-g98pd deletion completed in 6.095662099s • [SLOW TEST:10.259 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:09:28.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:09:28.863: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 11 11:09:33.868: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 11 11:09:33.868: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 11 11:09:33.931: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-pppmd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pppmd/deployments/test-cleanup-deployment,UID:c9b112bf-6388-11ea-9978-0242ac11000d,ResourceVersion:502097,Generation:1,CreationTimestamp:2020-03-11 11:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 11 11:09:33.943: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Mar 11 11:09:33.943: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 11 11:09:33.943: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-pppmd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pppmd/replicasets/test-cleanup-controller,UID:c6b2ff20-6388-11ea-9978-0242ac11000d,ResourceVersion:502098,Generation:1,CreationTimestamp:2020-03-11 11:09:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c9b112bf-6388-11ea-9978-0242ac11000d 0xc002189387 0xc002189388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 11 11:09:33.950: INFO: Pod "test-cleanup-controller-cvsk2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-cvsk2,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-pppmd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pppmd/pods/test-cleanup-controller-cvsk2,UID:c6b4f096-6388-11ea-9978-0242ac11000d,ResourceVersion:502090,Generation:0,CreationTimestamp:2020-03-11 11:09:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c6b2ff20-6388-11ea-9978-0242ac11000d 0xc0021899e7 0xc0021899e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nslk5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nslk5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nslk5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002189a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002189a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:09:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:09:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:09:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:09:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.16,StartTime:2020-03-11 11:09:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 11:09:30 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://79d30b1479bfe557620b70e62707791d588ea7c39e8ef8e5b2b3e1da51703503}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:09:33.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-pppmd" for this suite. Mar 11 11:09:40.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:09:40.074: INFO: namespace: e2e-tests-deployment-pppmd, resource: bindings, ignored listing per whitelist Mar 11 11:09:40.119: INFO: namespace e2e-tests-deployment-pppmd deletion completed in 6.150187393s • [SLOW TEST:11.336 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:09:40.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:09:40.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd77c79f-6388-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-tvd7d" to be "success or failure" Mar 11 11:09:40.257: INFO: Pod "downwardapi-volume-cd77c79f-6388-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.746702ms Mar 11 11:09:42.260: INFO: Pod "downwardapi-volume-cd77c79f-6388-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.045901212s STEP: Saw pod success Mar 11 11:09:42.260: INFO: Pod "downwardapi-volume-cd77c79f-6388-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:09:42.263: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-cd77c79f-6388-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:09:42.279: INFO: Waiting for pod downwardapi-volume-cd77c79f-6388-11ea-bacb-0242ac11000a to disappear Mar 11 11:09:42.283: INFO: Pod downwardapi-volume-cd77c79f-6388-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:09:42.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tvd7d" for this suite. 
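The DefaultMode spec above checks the permission bits the kubelet applies to files in a downwardAPI volume. Overriding them is done with the volume-level defaultMode field; 0400 below is illustrative (when the field is omitted, projected files default to 0644).

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400             # applied to every projected file unless an item overrides it
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF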
Mar 11 11:09:48.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:09:48.368: INFO: namespace: e2e-tests-downward-api-tvd7d, resource: bindings, ignored listing per whitelist Mar 11 11:09:48.390: INFO: namespace e2e-tests-downward-api-tvd7d deletion completed in 6.105369607s • [SLOW TEST:8.271 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:09:48.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:09:48.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Mar 11 11:09:48.529: INFO: stderr: "" Mar 11 11:09:48.529: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Mar 11 11:09:48.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zh2h4' Mar 11 11:09:48.769: INFO: stderr: "" Mar 11 11:09:48.769: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 11 11:09:48.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zh2h4' Mar 11 11:09:49.125: INFO: stderr: "" Mar 11 11:09:49.125: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 11 11:09:50.129: INFO: Selector matched 1 pods for map[app:redis] Mar 11 11:09:50.129: INFO: Found 1 / 1 Mar 11 11:09:50.129: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 11 11:09:50.132: INFO: Selector matched 1 pods for map[app:redis] Mar 11 11:09:50.132: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 11 11:09:50.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-br7vs --namespace=e2e-tests-kubectl-zh2h4' Mar 11 11:09:50.250: INFO: stderr: "" Mar 11 11:09:50.250: INFO: stdout: "Name: redis-master-br7vs\nNamespace: e2e-tests-kubectl-zh2h4\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.11\nStart Time: Wed, 11 Mar 2020 11:09:48 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.17\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://1016753af36e8c0aecfdbca194cdc663cef7f2546dd7587e9b2d9078e6e31c22\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 11 Mar 2020 11:09:49 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-9wpgq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-9wpgq:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-9wpgq\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned e2e-tests-kubectl-zh2h4/redis-master-br7vs to hunter-worker2\n Normal Pulled 1s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" Mar 11 11:09:50.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-zh2h4' Mar 11 11:09:50.366: INFO: stderr: "" Mar 11 11:09:50.367: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-zh2h4\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: redis-master-br7vs\n" Mar 11 11:09:50.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-zh2h4' Mar 11 11:09:50.445: INFO: stderr: "" Mar 11 11:09:50.445: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-zh2h4\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.98.147.111\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.17:6379\nSession Affinity: None\nEvents: \n" Mar 11 11:09:50.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Mar 11 11:09:50.529: INFO: stderr: "" Mar 11 11:09:50.529: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:42:14 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 11 Mar 2020 11:09:42 +0000 Sun, 08 Mar 2020 14:42:09 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 11 Mar 2020 11:09:42 +0000 Sun, 08 Mar 2020 14:42:09 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 11 Mar 2020 11:09:42 +0000 Sun, 08 Mar 2020 14:42:09 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 11 Mar 2020 11:09:42 +0000 Sun, 08 Mar 2020 14:42:44 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.13\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: 2a4329de41344349b36017b3052d3f96\n System UUID: b0983dfc-866e-4257-9f60-ab0b470ce9b2\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-54ff9cd656-4gmwj 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d20h\n kube-system coredns-54ff9cd656-jp8ll 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d20h\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d20h\n kube-system kindnet-gd8fq 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2d20h\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 2d20h\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 2d20h\n kube-system kube-proxy-75z28 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d20h\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2d20h\n local-path-storage local-path-provisioner-77cfdd744c-mrm9p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d20h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 11 11:09:50.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-zh2h4' Mar 11 11:09:50.606: INFO: stderr: "" Mar 11 11:09:50.606: INFO: stdout: "Name: e2e-tests-kubectl-zh2h4\nLabels: e2e-framework=kubectl\n e2e-run=90423c28-6385-11ea-bacb-0242ac11000a\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 
Mar 11 11:09:50.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zh2h4" for this suite. Mar 11 11:10:12.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:10:12.687: INFO: namespace: e2e-tests-kubectl-zh2h4, resource: bindings, ignored listing per whitelist Mar 11 11:10:12.712: INFO: namespace e2e-tests-kubectl-zh2h4 deletion completed in 22.103097417s • [SLOW TEST:24.321 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:10:12.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 11 11:10:18.875: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 11:10:18.896: INFO: Pod pod-with-poststart-http-hook still exists Mar 11 11:10:20.896: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 11:10:20.900: INFO: Pod pod-with-poststart-http-hook still exists Mar 11 11:10:22.896: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 11:10:22.900: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:10:22.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-kszmd" for this suite. 
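The pod-with-poststart-http-hook created in the lifecycle-hook spec above fires an HTTP GET at a separate handler pod as soon as its container starts. A minimal Go sketch of such a pod, assuming the v1.13-era k8s.io/api types; the image, port, path, and target IP are illustrative assumptions, not the suite's exact values:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // Pod whose container issues an HTTP GET against a hook-handler pod
    // immediately after the container starts (postStart hook).
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "poststart",
                Image: "k8s.gcr.io/pause:3.1", // illustrative image
                Lifecycle: &corev1.Lifecycle{
                    // corev1.Handler is the v1.13-era type name
                    // (renamed LifecycleHandler in newer API versions).
                    PostStart: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/echo?msg=poststart", // illustrative path
                            Port: intstr.FromInt(8080),  // illustrative port
                            Host: "10.244.2.1",          // illustrative handler-pod IP
                        },
                    },
                },
            }},
        },
    }
    // Print the object as JSON for inspection.
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

If the hook request fails, the kubelet kills and restarts the container, which is why the spec only needs to poll the handler pod to confirm the hook fired.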
Mar 11 11:10:44.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:10:44.950: INFO: namespace: e2e-tests-container-lifecycle-hook-kszmd, resource: bindings, ignored listing per whitelist Mar 11 11:10:44.999: INFO: namespace e2e-tests-container-lifecycle-hook-kszmd deletion completed in 22.095112619s • [SLOW TEST:32.287 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:10:44.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:10:45.095: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 11 11:10:45.115: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 11 11:10:50.119: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 11 11:10:50.119: INFO: Creating deployment "test-rolling-update-deployment" Mar 11 11:10:50.123: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 11 11:10:50.132: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 11 11:10:52.139: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 11 11:10:52.141: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719521850, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719521850, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719521850, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719521850, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 11:10:54.145: INFO: Ensuring deployment 
"test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 11 11:10:54.152: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-9w759,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9w759/deployments/test-rolling-update-deployment,UID:f7235428-6388-11ea-9978-0242ac11000d,ResourceVersion:502438,Generation:1,CreationTimestamp:2020-03-11 11:10:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-11 11:10:50 +0000 UTC 2020-03-11 11:10:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-11 11:10:53 +0000 UTC 2020-03-11 11:10:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 11 11:10:54.154: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-9w759,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9w759/replicasets/test-rolling-update-deployment-75db98fb4c,UID:f725ffb2-6388-11ea-9978-0242ac11000d,ResourceVersion:502429,Generation:1,CreationTimestamp:2020-03-11 11:10:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f7235428-6388-11ea-9978-0242ac11000d 0xc00216e4f7 0xc00216e4f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 11 11:10:54.154: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 11 11:10:54.155: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-9w759,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9w759/replicasets/test-rolling-update-controller,UID:f4249b17-6388-11ea-9978-0242ac11000d,ResourceVersion:502437,Generation:2,CreationTimestamp:2020-03-11 11:10:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 
3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f7235428-6388-11ea-9978-0242ac11000d 0xc00216e407 0xc00216e408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 11:10:54.157: INFO: Pod "test-rolling-update-deployment-75db98fb4c-jr86x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-jr86x,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-9w759,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9w759/pods/test-rolling-update-deployment-75db98fb4c-jr86x,UID:f7291725-6388-11ea-9978-0242ac11000d,ResourceVersion:502428,Generation:0,CreationTimestamp:2020-03-11 11:10:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c f725ffb2-6388-11ea-9978-0242ac11000d 0xc00216f2b7 0xc00216f2b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-swprb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-swprb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-swprb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00216f330} {node.kubernetes.io/unreachable Exists NoExecute 0xc00216f350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:10:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:10:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:10:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:10:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.20,StartTime:2020-03-11 11:10:50 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-11 11:10:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3bf12e2515b48fe2273a77e0c138ffc35412d9b1e64c420e3851f8f3fe9d4b03}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:10:54.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-9w759" for this suite. 
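The rolling-update behavior verified above — the Deployment adopts the pre-existing replica set, rolls out a new one, and scales the old one to zero — follows from a RollingUpdate-strategy Deployment along these lines. A minimal Go sketch with the v1.13-era k8s.io/api types; the selector labels, replica count, and image mirror the log, the rest is illustrative:

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    labels := map[string]string{"name": "sample-pod"}
    dep := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            // The selector matches the pods of the adopted replica set,
            // which is what lets the Deployment take ownership of it.
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Strategy: appsv1.DeploymentStrategy{
                Type: appsv1.RollingUpdateDeploymentStrategyType,
                // RollingUpdate left nil: the 25%/25% maxUnavailable and
                // maxSurge printed in the log are simply the defaults.
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "redis",
                        Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                    }},
                },
            },
        },
    }
    // Print the object as JSON for inspection.
    out, _ := json.MarshalIndent(dep, "", "  ")
    fmt.Println(string(out))
}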
Mar 11 11:11:00.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:11:00.249: INFO: namespace: e2e-tests-deployment-9w759, resource: bindings, ignored listing per whitelist Mar 11 11:11:00.261: INFO: namespace e2e-tests-deployment-9w759 deletion completed in 6.10035612s • [SLOW TEST:15.261 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:11:00.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-fd3b211b-6388-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:11:00.403: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fd3d694e-6388-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-cj64x" to be "success or failure" Mar 11 11:11:00.406: INFO: Pod "pod-projected-configmaps-fd3d694e-6388-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.533826ms Mar 11 11:11:02.411: INFO: Pod "pod-projected-configmaps-fd3d694e-6388-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008240993s STEP: Saw pod success Mar 11 11:11:02.411: INFO: Pod "pod-projected-configmaps-fd3d694e-6388-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:11:02.414: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-fd3d694e-6388-11ea-bacb-0242ac11000a container projected-configmap-volume-test: STEP: delete the pod Mar 11 11:11:02.481: INFO: Waiting for pod pod-projected-configmaps-fd3d694e-6388-11ea-bacb-0242ac11000a to disappear Mar 11 11:11:02.484: INFO: Pod pod-projected-configmaps-fd3d694e-6388-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:11:02.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cj64x" for this suite. 
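The "volume with mappings" variant above mounts the ConfigMap through a projected volume and remaps a key to a nested file path via items. A minimal Go sketch of such a pod, assuming the v1.13-era k8s.io/api types; the ConfigMap name, key, path, image, and command are illustrative assumptions:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{
                                    Name: "projected-configmap-test-volume-map",
                                },
                                // Remap the ConfigMap key "data-1" to the
                                // file "path/to/data-2" inside the volume.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "docker.io/library/busybox:1.29", // illustrative image
                Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                }},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    // Print the object as JSON for inspection.
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

The spec then asserts "success or failure" on the pod, i.e. the container exits 0 after printing the mapped file's contents.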
Mar 11 11:11:08.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:11:08.565: INFO: namespace: e2e-tests-projected-cj64x, resource: bindings, ignored listing per whitelist Mar 11 11:11:08.577: INFO: namespace e2e-tests-projected-cj64x deletion completed in 6.089489211s • [SLOW TEST:8.316 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:11:08.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-cmlzg Mar 11 11:11:10.712: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-cmlzg STEP: checking the pod's current state and verifying that restartCount is present Mar 11 11:11:10.714: INFO: Initial restart count of pod liveness-http is 0 Mar 11 11:11:30.797: INFO: Restart count of pod e2e-tests-container-probe-cmlzg/liveness-http is now 1 (20.082578476s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:11:30.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-cmlzg" for this suite. 
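The restart observed above (restartCount reaching 1 roughly 20s in) is produced by an HTTP liveness probe against /healthz on a container that starts failing the endpoint after a while. A minimal Go sketch of the probe wiring, assuming the v1.13-era k8s.io/api types; the image and port are illustrative assumptions:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "liveness",
                Image: "k8s.gcr.io/liveness:latest", // illustrative image that fails /healthz after a delay
                LivenessProbe: &corev1.Probe{
                    // The embedded field is named Handler in the v1.13 API
                    // (ProbeHandler in newer releases).
                    Handler: corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/healthz",
                            Port: intstr.FromInt(8080), // illustrative port
                        },
                    },
                    InitialDelaySeconds: 5,
                    PeriodSeconds:       3,
                    FailureThreshold:    1,
                },
            }},
        },
    }
    // Print the object as JSON for inspection.
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

Once /healthz starts returning errors, the kubelet kills the container and restarts it, which is the restartCount increment the spec waits for.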
Mar 11 11:11:36.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:11:36.859: INFO: namespace: e2e-tests-container-probe-cmlzg, resource: bindings, ignored listing per whitelist Mar 11 11:11:36.922: INFO: namespace e2e-tests-container-probe-cmlzg deletion completed in 6.107672501s • [SLOW TEST:28.346 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:11:36.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0311 11:11:38.083759 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 11 11:11:38.083: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:11:38.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-jx572" for this suite. 
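"Not orphaning" above means the Deployment is deleted with a propagation policy under which the garbage collector also removes the dependent ReplicaSet and pods — hence the transient "expected 0 ... got ..." readings while collection catches up. A minimal Go sketch of the delete options involved, assuming the v1.13-era apimachinery types; the surrounding client call and deployment name are only described in comments:

package main

import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Background propagation: the Deployment object is removed immediately
    // and the garbage collector deletes the owned ReplicaSet and Pods
    // afterwards. DeletePropagationForeground would instead block removal
    // of the owner until its dependents are gone; DeletePropagationOrphan
    // would leave the dependents behind.
    policy := metav1.DeletePropagationBackground
    opts := &metav1.DeleteOptions{PropagationPolicy: &policy}

    // In a v1.13-era client-go program these options would be passed to the
    // AppsV1 deployments client's Delete call for the deployment created by
    // the test; only the options themselves are shown here.
    out, _ := json.MarshalIndent(opts, "", "  ")
    fmt.Println(string(out))
}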
Mar 11 11:11:44.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:11:44.142: INFO: namespace: e2e-tests-gc-jx572, resource: bindings, ignored listing per whitelist Mar 11 11:11:44.215: INFO: namespace e2e-tests-gc-jx572 deletion completed in 6.128714367s • [SLOW TEST:7.292 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:11:44.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-176d5c76-6389-11ea-bacb-0242ac11000a STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-176d5c76-6389-11ea-bacb-0242ac11000a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:11:48.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j857w" for this suite. 
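The update test above depends on the kubelet periodically re-syncing projected ConfigMap volumes, so a change to the ConfigMap's data appears in the mounted file without restarting the pod. A minimal Go sketch of the object before and after the update, assuming the v1.13-era k8s.io/api types; names and values are illustrative assumptions:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "projected-configmap-test-upd", // illustrative name
            Namespace: "default",                      // illustrative namespace
        },
        Data: map[string]string{"data-1": "value-1"},
    }

    // The spec mutates the value and sends the object back through the
    // ConfigMaps client's Update call; the projected volume in the running
    // pod is refreshed on the kubelet's next sync, which is what the
    // "waiting to observe update in volume" step polls for.
    cm.Data["data-1"] = "value-2"
    fmt.Printf("updated ConfigMap %s/%s: %v\n", cm.Namespace, cm.Name, cm.Data)
}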
Mar 11 11:12:10.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:12:10.413: INFO: namespace: e2e-tests-projected-j857w, resource: bindings, ignored listing per whitelist Mar 11 11:12:10.445: INFO: namespace e2e-tests-projected-j857w deletion completed in 22.070709935s • [SLOW TEST:26.230 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:12:10.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-270ea784-6389-11ea-bacb-0242ac11000a STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:12:12.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-jtj4v" for this suite. 
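The binary-data case above exercises the ConfigMap binaryData field, which carries raw bytes alongside the UTF-8-only data field and is written into the volume verbatim as a file named after its key. A minimal Go sketch, assuming the v1.13-era k8s.io/api types; the name and payload are illustrative assumptions:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-binary"}, // illustrative name
        // Text keys go in Data; non-UTF-8 payloads go in BinaryData and are
        // mounted as-is, which is what the "pod with binary data" step reads back.
        Data:       map[string]string{"data": "value-1"},
        BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe, 0x00, 0xba, 0xad}},
    }
    // BinaryData is base64-encoded when the object is serialized to JSON.
    out, _ := json.MarshalIndent(cm, "", "  ")
    fmt.Println(string(out))
}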
Mar 11 11:12:34.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:12:34.619: INFO: namespace: e2e-tests-configmap-jtj4v, resource: bindings, ignored listing per whitelist Mar 11 11:12:34.648: INFO: namespace e2e-tests-configmap-jtj4v deletion completed in 22.062465111s • [SLOW TEST:24.203 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:12:34.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:12:34.714: INFO: Creating deployment "nginx-deployment" Mar 11 11:12:34.725: INFO: Waiting for observed generation 1 Mar 11 11:12:36.758: INFO: Waiting for all required pods to come up Mar 11 11:12:36.762: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 11 11:12:38.772: INFO: Waiting for deployment "nginx-deployment" to complete Mar 11 11:12:38.777: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 11 11:12:38.783: INFO: Updating deployment nginx-deployment Mar 11 11:12:38.783: INFO: Waiting for observed generation 2 Mar 11 11:12:40.812: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 11 11:12:40.814: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 11 11:12:40.816: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 11 11:12:40.821: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 11 11:12:40.821: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 11 11:12:40.823: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 11 11:12:40.827: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 11 11:12:40.827: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 11 11:12:40.840: INFO: Updating deployment nginx-deployment Mar 11 11:12:40.840: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Mar 11 11:12:40.920: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 11 11:12:40.987: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 
11 11:12:43.068: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9wqlq/deployments/nginx-deployment,UID:357b120a-6389-11ea-9978-0242ac11000d,ResourceVersion:503080,Generation:3,CreationTimestamp:2020-03-11 11:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-03-11 11:12:40 +0000 UTC 2020-03-11 11:12:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-11 11:12:41 +0000 UTC 2020-03-11 11:12:34 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 11 11:12:43.070: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9wqlq/replicasets/nginx-deployment-5c98f8fb5,UID:37e8305d-6389-11ea-9978-0242ac11000d,ResourceVersion:503073,Generation:3,CreationTimestamp:2020-03-11 11:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 357b120a-6389-11ea-9978-0242ac11000d 0xc0023fa957 0xc0023fa958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 11:12:43.070: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 11 11:12:43.070: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9wqlq/replicasets/nginx-deployment-85ddf47c5d,UID:358188fb-6389-11ea-9978-0242ac11000d,ResourceVersion:503077,Generation:3,CreationTimestamp:2020-03-11 11:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 357b120a-6389-11ea-9978-0242ac11000d 0xc0023faa27 0xc0023faa28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 11 11:12:43.073: INFO: Pod "nginx-deployment-5c98f8fb5-4q2fl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4q2fl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-4q2fl,UID:37ee128e-6389-11ea-9978-0242ac11000d,ResourceVersion:503005,Generation:0,CreationTimestamp:2020-03-11 11:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0023fb3d7 0xc0023fb3d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023fb450} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0023fb470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:,StartTime:2020-03-11 11:12:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.074: INFO: Pod "nginx-deployment-5c98f8fb5-4x6bn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4x6bn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-4x6bn,UID:393133bf-6389-11ea-9978-0242ac11000d,ResourceVersion:503060,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0023fb530 0xc0023fb531}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023fb5c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023fb5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.074: INFO: Pod "nginx-deployment-5c98f8fb5-8qslg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8qslg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-8qslg,UID:39313cb4-6389-11ea-9978-0242ac11000d,ResourceVersion:503066,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0023fb650 0xc0023fb651}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023fb6d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023fb6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.074: INFO: Pod "nginx-deployment-5c98f8fb5-g55rw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-g55rw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-g55rw,UID:3937e59f-6389-11ea-9978-0242ac11000d,ResourceVersion:503071,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0023fb760 0xc0023fb761}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023fb7e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023fb800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:41 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.074: INFO: Pod "nginx-deployment-5c98f8fb5-jtgnb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jtgnb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-jtgnb,UID:39312c49-6389-11ea-9978-0242ac11000d,ResourceVersion:503067,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0023fb870 0xc0023fb871}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023fb8f0} {node.kubernetes.io/unreachable Exists 
NoExecute 0xc0023fb910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.074: INFO: Pod "nginx-deployment-5c98f8fb5-l25sw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l25sw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-l25sw,UID:37ec2eb4-6389-11ea-9978-0242ac11000d,ResourceVersion:503008,Generation:0,CreationTimestamp:2020-03-11 11:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0023fb980 0xc0023fb981}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023fba00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023fba20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-11 11:12:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.074: INFO: Pod "nginx-deployment-5c98f8fb5-mchf8" is not 
available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mchf8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-mchf8,UID:37ff94ea-6389-11ea-9978-0242ac11000d,ResourceVersion:503096,Generation:0,CreationTimestamp:2020-03-11 11:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0023fbae0 0xc0023fbae1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023fbb60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023fbb80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:,StartTime:2020-03-11 11:12:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.074: INFO: Pod "nginx-deployment-5c98f8fb5-r952r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-r952r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-r952r,UID:392e0e8c-6389-11ea-9978-0242ac11000d,ResourceVersion:503041,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0023fbc40 0xc0023fbc41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023fbcc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023fbce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.074: INFO: Pod "nginx-deployment-5c98f8fb5-tdlff" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tdlff,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-tdlff,UID:39292e72-6389-11ea-9978-0242ac11000d,ResourceVersion:503140,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0023fbd50 0xc0023fbd51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023fbdd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023fbdf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-11 11:12:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.074: INFO: Pod "nginx-deployment-5c98f8fb5-tvn7m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tvn7m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-tvn7m,UID:39312fd9-6389-11ea-9978-0242ac11000d,ResourceVersion:503059,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0023fbeb0 0xc0023fbeb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023fbf30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023fbf50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.074: INFO: Pod "nginx-deployment-5c98f8fb5-vx6wt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vx6wt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-vx6wt,UID:37ee1874-6389-11ea-9978-0242ac11000d,ResourceVersion:503012,Generation:0,CreationTimestamp:2020-03-11 11:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0023fbfc0 0xc0023fbfc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019920b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019920f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-03-11 11:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:,StartTime:2020-03-11 11:12:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.074: INFO: Pod "nginx-deployment-5c98f8fb5-wwcdx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wwcdx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-wwcdx,UID:392e0e22-6389-11ea-9978-0242ac11000d,ResourceVersion:503055,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0019922c0 0xc0019922c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001992350} {node.kubernetes.io/unreachable Exists NoExecute 0xc001992370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.075: INFO: Pod "nginx-deployment-5c98f8fb5-xfnjm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xfnjm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-5c98f8fb5-xfnjm,UID:37fbb2c2-6389-11ea-9978-0242ac11000d,ResourceVersion:503015,Generation:0,CreationTimestamp:2020-03-11 11:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 37e8305d-6389-11ea-9978-0242ac11000d 0xc0019924e0 0xc0019924e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019925c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019925e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-11 11:12:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.075: INFO: Pod "nginx-deployment-85ddf47c5d-69jtd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-69jtd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-69jtd,UID:3588ddf9-6389-11ea-9978-0242ac11000d,ResourceVersion:502950,Generation:0,CreationTimestamp:2020-03-11 11:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc001992760 0xc001992761}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001992890} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019928c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.104,StartTime:2020-03-11 11:12:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 11:12:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fc88e9c80c3c328f5ec4ee67172246772ed41a893a3e09e7351a9795f251b315}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.075: INFO: Pod "nginx-deployment-85ddf47c5d-6dfjw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6dfjw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-6dfjw,UID:358cbb7d-6389-11ea-9978-0242ac11000d,ResourceVersion:502923,Generation:0,CreationTimestamp:2020-03-11 11:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc001992a30 0xc001992a31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001992af0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001992b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.102,StartTime:2020-03-11 11:12:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 11:12:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0f296095cdd3c01594d5f61a1eb7996b9ec6c66f39e1c6c50c33c3083b688653}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.075: INFO: Pod "nginx-deployment-85ddf47c5d-6lkrg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6lkrg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-6lkrg,UID:3931421e-6389-11ea-9978-0242ac11000d,ResourceVersion:503061,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc001992d00 0xc001992d01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001992df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001992e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.075: INFO: Pod "nginx-deployment-85ddf47c5d-chs2r" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-chs2r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-chs2r,UID:3588dd75-6389-11ea-9978-0242ac11000d,ResourceVersion:502937,Generation:0,CreationTimestamp:2020-03-11 11:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0019932a0 0xc0019932a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001993310} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001993330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.103,StartTime:2020-03-11 11:12:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 11:12:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://58bac79b0d81ddb2deb7bb61436cf5d55901363ae8bb5364096ab1bd87b15b6f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.075: INFO: Pod "nginx-deployment-85ddf47c5d-ck45r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ck45r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-ck45r,UID:39288caf-6389-11ea-9978-0242ac11000d,ResourceVersion:503026,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0019933f0 0xc0019933f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001993e20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001993e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.075: INFO: Pod "nginx-deployment-85ddf47c5d-cxgqd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cxgqd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-cxgqd,UID:39293918-6389-11ea-9978-0242ac11000d,ResourceVersion:503030,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc001993f20 0xc001993f21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e0010} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e0030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.075: INFO: Pod "nginx-deployment-85ddf47c5d-g2hdz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g2hdz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-g2hdz,UID:392ee02c-6389-11ea-9978-0242ac11000d,ResourceVersion:503044,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e0160 
0xc0021e0161}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e01d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e01f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.075: INFO: Pod "nginx-deployment-85ddf47c5d-gpz7x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gpz7x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-gpz7x,UID:392eda4c-6389-11ea-9978-0242ac11000d,ResourceVersion:503054,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e0260 0xc0021e0261}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e02d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e02f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.075: INFO: Pod "nginx-deployment-85ddf47c5d-jjtdx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jjtdx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-jjtdx,UID:358c7b2e-6389-11ea-9978-0242ac11000d,ResourceVersion:502934,Generation:0,CreationTimestamp:2020-03-11 11:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e0360 0xc0021e0361}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e03d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e03f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.27,StartTime:2020-03-11 11:12:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 11:12:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://29d9a6d8ebacf4afb52947ba7869e598c17efa177add60ab5268820315b6d595}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.076: INFO: Pod "nginx-deployment-85ddf47c5d-jx9nb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jx9nb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-jx9nb,UID:39314c02-6389-11ea-9978-0242ac11000d,ResourceVersion:503065,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e04b0 0xc0021e04b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e0520} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e0540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.076: INFO: Pod "nginx-deployment-85ddf47c5d-lznjj" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lznjj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-lznjj,UID:35843486-6389-11ea-9978-0242ac11000d,ResourceVersion:502913,Generation:0,CreationTimestamp:2020-03-11 11:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e05b0 0xc0021e05b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e0620} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e0640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.100,StartTime:2020-03-11 11:12:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 11:12:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e475b567bcd817e1f14d63e962fa178fe27695c070acb7b067d1df5a977a1c56}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.076: INFO: Pod "nginx-deployment-85ddf47c5d-mcpn4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mcpn4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-mcpn4,UID:39312117-6389-11ea-9978-0242ac11000d,ResourceVersion:503064,Generation:0,CreationTimestamp:2020-03-11 
11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e0710 0xc0021e0711}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e0780} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e07a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.076: INFO: Pod "nginx-deployment-85ddf47c5d-mnhg9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mnhg9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-mnhg9,UID:392ee06f-6389-11ea-9978-0242ac11000d,ResourceVersion:503049,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e0810 0xc0021e0811}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e0890} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e08b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.076: INFO: Pod "nginx-deployment-85ddf47c5d-ntk5m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ntk5m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-ntk5m,UID:3588da8a-6389-11ea-9978-0242ac11000d,ResourceVersion:502917,Generation:0,CreationTimestamp:2020-03-11 11:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e0920 0xc0021e0921}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e0990} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0021e09b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.101,StartTime:2020-03-11 11:12:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 11:12:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://934de5532fc4220453b8a7682c9697e39d8bdd0197a1055f6b603abe46e6e6b8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.076: INFO: Pod "nginx-deployment-85ddf47c5d-ntlpc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ntlpc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-ntlpc,UID:39315196-6389-11ea-9978-0242ac11000d,ResourceVersion:503063,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e0a70 0xc0021e0a71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e0ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e0b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.076: INFO: Pod "nginx-deployment-85ddf47c5d-rxl7r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rxl7r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-rxl7r,UID:392954a3-6389-11ea-9978-0242ac11000d,ResourceVersion:503118,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e0b70 0xc0021e0b71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e0be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e0c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-11 11:12:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.076: INFO: Pod "nginx-deployment-85ddf47c5d-tx8jt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tx8jt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-tx8jt,UID:392ed7a7-6389-11ea-9978-0242ac11000d,ResourceVersion:503050,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e0cb0 0xc0021e0cb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e0d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e0d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.076: INFO: Pod "nginx-deployment-85ddf47c5d-txdzv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-txdzv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-txdzv,UID:3587afb8-6389-11ea-9978-0242ac11000d,ResourceVersion:502942,Generation:0,CreationTimestamp:2020-03-11 11:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e0dd0 0xc0021e0dd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e0e40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e0e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.25,StartTime:2020-03-11 11:12:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 11:12:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f906bd26d8e74cbf2b046e75639405da55982a7087f7a49ea59cbf09a1e6a54e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.076: INFO: Pod "nginx-deployment-85ddf47c5d-vcbmg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vcbmg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-vcbmg,UID:3587acec-6389-11ea-9978-0242ac11000d,ResourceVersion:502911,Generation:0,CreationTimestamp:2020-03-11 11:12:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e0f30 0xc0021e0f31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e0fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e0fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.24,StartTime:2020-03-11 11:12:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 11:12:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7ce78fd7b61da116647a482b6b1eba90706b27d043aaae0f77da3c2701d77ddd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 11:12:43.077: INFO: Pod "nginx-deployment-85ddf47c5d-whqvn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-whqvn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9wqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9wqlq/pods/nginx-deployment-85ddf47c5d-whqvn,UID:39314929-6389-11ea-9978-0242ac11000d,ResourceVersion:503062,Generation:0,CreationTimestamp:2020-03-11 11:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 358188fb-6389-11ea-9978-0242ac11000d 0xc0021e1090 0xc0021e1091}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r6tnt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r6tnt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r6tnt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021e1120} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021e1190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:12:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:12:43.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-9wqlq" for this suite. Mar 11 11:12:53.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:12:53.168: INFO: namespace: e2e-tests-deployment-9wqlq, resource: bindings, ignored listing per whitelist Mar 11 11:12:53.179: INFO: namespace e2e-tests-deployment-9wqlq deletion completed in 10.097922022s • [SLOW TEST:18.531 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:12:53.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
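The pod dumps above each end with an "is available" / "is not available" verdict. A minimal sketch of that check, assuming it reduces to Phase=Running plus a Ready condition that has held for at least the deployment's minReadySeconds; the suite uses its own helper, this only illustrates the fields visible in the dumps and assumes k8s.io/api is on the module path:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
)

// isAvailable reproduces the idea behind the "is available" verdicts above:
// the pod is Running and its Ready condition has been True for at least
// minReadySeconds. Simplified sketch only, not the framework's own helper.
func isAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return now.Sub(c.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second
		}
	}
	return false
}

func main() {
	// A pod with no status yet (like the Pending dumps above) is not available.
	fmt.Println(isAvailable(&corev1.Pod{}, 0, time.Now()))
}
```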
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 11 11:13:05.316: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 11 11:13:05.355: INFO: Pod pod-with-prestop-http-hook still exists Mar 11 11:13:07.355: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 11 11:13:07.359: INFO: Pod pod-with-prestop-http-hook still exists Mar 11 11:13:09.356: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 11 11:13:09.392: INFO: Pod pod-with-prestop-http-hook still exists Mar 11 11:13:11.355: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 11 11:13:11.358: INFO: Pod pod-with-prestop-http-hook still exists Mar 11 11:13:13.355: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 11 11:13:13.359: INFO: Pod pod-with-prestop-http-hook still exists Mar 11 11:13:15.355: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 11 11:13:15.360: INFO: Pod pod-with-prestop-http-hook still exists Mar 11 11:13:17.355: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 11 11:13:17.359: INFO: Pod pod-with-prestop-http-hook still exists Mar 11 11:13:19.355: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 11 11:13:19.360: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:13:19.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tqxn5" for this suite. 
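For reference, the pod under test wires the hook through the container's lifecycle field. A minimal sketch, assuming a pause-style image and a handler listening on port 8080 at /echo?msg=prestop; the real suite builds its own handler pod and URL, and the assumed values are marked in comments:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// preStopHTTPPod builds a pod shaped like "pod-with-prestop-http-hook": on
// deletion the kubelet first issues the HTTP GET below against the handler
// pod created in BeforeEach, then stops the container. Image, path and port
// are placeholders, not the suite's exact values.
func preStopHTTPPod(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // any long-running image will do
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler in clients of this vintage (v1.13);
					// newer releases renamed the type to LifecycleHandler.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop", // assumed handler path
							Host: handlerIP,
							Port: intstr.FromInt(8080), // assumed handler port
						},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(preStopHTTPPod("10.244.1.1").Spec.Containers[0].Lifecycle.PreStop.HTTPGet.Path)
}
```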
Mar 11 11:13:41.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:13:41.453: INFO: namespace: e2e-tests-container-lifecycle-hook-tqxn5, resource: bindings, ignored listing per whitelist Mar 11 11:13:41.513: INFO: namespace e2e-tests-container-lifecycle-hook-tqxn5 deletion completed in 22.141589393s • [SLOW TEST:48.333 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:13:41.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0311 11:13:51.722084 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 11 11:13:51.722: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:13:51.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-r2rtk" for this suite. Mar 11 11:13:57.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:13:57.805: INFO: namespace: e2e-tests-gc-r2rtk, resource: bindings, ignored listing per whitelist Mar 11 11:13:57.811: INFO: namespace e2e-tests-gc-r2rtk deletion completed in 6.086329171s • [SLOW TEST:16.298 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:13:57.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 11 11:13:57.891: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 11 11:13:57.898: INFO: Waiting for terminating namespaces to be deleted... 
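The step quoted above, "set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well", is what protects those pods: a dependent is only garbage-collected once it has no remaining owners. A rough sketch of adding that second owner reference; the UID below is a placeholder, where the test reads it from the live ReplicationController:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// addSecondOwner appends simpletest-rc-to-stay as an additional owner, so
// deleting only simpletest-rc-to-be-deleted (even while it waits for
// dependents) must not cascade to this pod.
func addSecondOwner(pod *corev1.Pod, stayingRCUID types.UID) {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       "simpletest-rc-to-stay",
		UID:        stayingRCUID, // placeholder; taken from the live RC in the test
	})
}

func main() {
	p := &corev1.Pod{}
	addSecondOwner(p, types.UID("00000000-0000-0000-0000-000000000000"))
	fmt.Println(len(p.OwnerReferences)) // 1 here; 2 for the test pods, which already have one owner
}
```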
Mar 11 11:13:57.899: INFO: Logging pods the kubelet thinks are on node hunter-worker before test Mar 11 11:13:57.902: INFO: kindnet-jjqmp from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container status recorded) Mar 11 11:13:57.902: INFO: Container kindnet-cni ready: true, restart count 0 Mar 11 11:13:57.902: INFO: kube-proxy-h66sh from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container status recorded) Mar 11 11:13:57.902: INFO: Container kube-proxy ready: true, restart count 0 Mar 11 11:13:57.902: INFO: Logging pods the kubelet thinks are on node hunter-worker2 before test Mar 11 11:13:57.905: INFO: kube-proxy-chv9d from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container status recorded) Mar 11 11:13:57.905: INFO: Container kube-proxy ready: true, restart count 0 Mar 11 11:13:57.905: INFO: kindnet-nwqfj from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container status recorded) Mar 11 11:13:57.905: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Mar 11 11:13:57.958: INFO: Pod kindnet-jjqmp requesting resource cpu=100m on Node hunter-worker Mar 11 11:13:57.958: INFO: Pod kindnet-nwqfj requesting resource cpu=100m on Node hunter-worker2 Mar 11 11:13:57.958: INFO: Pod kube-proxy-chv9d requesting resource cpu=0m on Node hunter-worker2 Mar 11 11:13:57.958: INFO: Pod kube-proxy-h66sh requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires an unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-67190389-6389-11ea-bacb-0242ac11000a.15fb3b9ca83bba42], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-l5fjs/filler-pod-67190389-6389-11ea-bacb-0242ac11000a to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-67190389-6389-11ea-bacb-0242ac11000a.15fb3b9ce29dec74], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-67190389-6389-11ea-bacb-0242ac11000a.15fb3b9cf1c8f521], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-67190389-6389-11ea-bacb-0242ac11000a.15fb3b9cfe5dddb4], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-6719a706-6389-11ea-bacb-0242ac11000a.15fb3b9cabbb7260], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-l5fjs/filler-pod-6719a706-6389-11ea-bacb-0242ac11000a to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-6719a706-6389-11ea-bacb-0242ac11000a.15fb3b9ce89f611b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6719a706-6389-11ea-bacb-0242ac11000a.15fb3b9cf39506a8], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-6719a706-6389-11ea-bacb-0242ac11000a.15fb3b9d0197ab65], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fb3b9d23f1bef5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:14:01.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-l5fjs" for this suite. 
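The FailedScheduling event above is triggered by the last pod's CPU request alone. Roughly, that pod looks like the sketch below, where "600m" stands in for the value the test derives from node allocatable CPU minus what the filler pods already request:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuRequestPod has a single scheduling constraint: a CPU request. With the
// filler pods holding most of each node's allocatable CPU, a request this
// large cannot fit anywhere, so the scheduler reports "Insufficient cpu".
func cpuRequestPod(cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "additional-pod",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse(cpu), // illustrative value
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(cpuRequestPod("600m").Spec.Containers[0].Resources.Requests)
}
```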
Mar 11 11:14:07.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:14:07.177: INFO: namespace: e2e-tests-sched-pred-l5fjs, resource: bindings, ignored listing per whitelist Mar 11 11:14:07.223: INFO: namespace e2e-tests-sched-pred-l5fjs deletion completed in 6.086222461s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:9.412 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:14:07.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:14:07.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6caddfb7-6389-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-8jdx6" to be "success or failure" Mar 11 11:14:07.355: INFO: Pod "downwardapi-volume-6caddfb7-6389-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 27.264112ms Mar 11 11:14:09.359: INFO: Pod "downwardapi-volume-6caddfb7-6389-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.031011392s STEP: Saw pod success Mar 11 11:14:09.359: INFO: Pod "downwardapi-volume-6caddfb7-6389-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:14:09.362: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6caddfb7-6389-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:14:09.397: INFO: Waiting for pod downwardapi-volume-6caddfb7-6389-11ea-bacb-0242ac11000a to disappear Mar 11 11:14:09.399: INFO: Pod downwardapi-volume-6caddfb7-6389-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:14:09.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8jdx6" for this suite. 
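What this test mounts is a projected downwardAPI file that points at limits.memory of a container that sets no memory limit, so the kubelet substitutes the node's allocatable memory, which is the value the test asserts on. A sketch of that volume, with the file name assumed to be memory_limit:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// projectedMemoryLimitVolume exposes limits.memory of "client-container"
// (the container name seen in the log above) as a file in a projected
// downwardAPI volume. With no memory limit set on that container, the file
// ends up holding node allocatable memory.
func projectedMemoryLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit", // assumed file name
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Println(projectedMemoryLimitVolume().Name)
}
```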
Mar 11 11:14:15.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:14:15.460: INFO: namespace: e2e-tests-projected-8jdx6, resource: bindings, ignored listing per whitelist Mar 11 11:14:15.478: INFO: namespace e2e-tests-projected-8jdx6 deletion completed in 6.075822897s • [SLOW TEST:8.255 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:14:15.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 11 11:14:15.571: INFO: Waiting up to 5m0s for pod "pod-7197e522-6389-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-mxdqj" to be "success or failure" Mar 11 11:14:15.579: INFO: Pod "pod-7197e522-6389-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.742948ms Mar 11 11:14:17.583: INFO: Pod "pod-7197e522-6389-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011344838s STEP: Saw pod success Mar 11 11:14:17.583: INFO: Pod "pod-7197e522-6389-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:14:17.585: INFO: Trying to get logs from node hunter-worker pod pod-7197e522-6389-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 11:14:17.639: INFO: Waiting for pod pod-7197e522-6389-11ea-bacb-0242ac11000a to disappear Mar 11 11:14:17.645: INFO: Pod pod-7197e522-6389-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:14:17.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mxdqj" for this suite. 
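The emptyDir case above boils down to a memory-backed (tmpfs) volume mounted into a container that runs as a non-root UID and verifies a 0777 file mode. A sketch under those assumptions; the image tag, mounttest-style flag and UID are placeholders rather than the suite's exact values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod sketches the (non-root,0777,tmpfs) pod: an emptyDir backed
// by memory, mounted into a single test container running as a non-root UID.
func tmpfsEmptyDirPod() *corev1.Pod {
	nonRootUID := int64(1001) // placeholder non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium=Memory is what makes this tmpfs rather than node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed image
				// illustrative flag for the 0777 mode check
				Args:            []string{"--new_file_0777=/test-volume/test-file"},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
			}},
		},
	}
}

func main() {
	fmt.Println(tmpfsEmptyDirPod().Spec.Volumes[0].Name)
}
```

The (root,0644,tmpfs) case that follows uses the same shape, with the mode flag changed to 0644 and no RunAsUser override.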
Mar 11 11:14:23.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:14:23.738: INFO: namespace: e2e-tests-emptydir-mxdqj, resource: bindings, ignored listing per whitelist Mar 11 11:14:23.823: INFO: namespace e2e-tests-emptydir-mxdqj deletion completed in 6.174827388s • [SLOW TEST:8.344 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:14:23.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 11 11:14:23.921: INFO: Waiting up to 5m0s for pod "pod-768d4bd9-6389-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-2457g" to be "success or failure" Mar 11 11:14:23.927: INFO: Pod "pod-768d4bd9-6389-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.947377ms Mar 11 11:14:25.930: INFO: Pod "pod-768d4bd9-6389-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009803572s STEP: Saw pod success Mar 11 11:14:25.930: INFO: Pod "pod-768d4bd9-6389-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:14:25.933: INFO: Trying to get logs from node hunter-worker pod pod-768d4bd9-6389-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 11:14:25.951: INFO: Waiting for pod pod-768d4bd9-6389-11ea-bacb-0242ac11000a to disappear Mar 11 11:14:25.956: INFO: Pod pod-768d4bd9-6389-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:14:25.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2457g" for this suite. 
Mar 11 11:14:31.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:14:32.007: INFO: namespace: e2e-tests-emptydir-2457g, resource: bindings, ignored listing per whitelist Mar 11 11:14:32.028: INFO: namespace e2e-tests-emptydir-2457g deletion completed in 6.069244964s • [SLOW TEST:8.205 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:14:32.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:14:32.100: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 11 11:14:37.103: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 11 11:14:37.103: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 11 11:14:39.107: INFO: Creating deployment "test-rollover-deployment" Mar 11 11:14:39.131: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 11 11:14:41.137: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 11 11:14:41.144: INFO: Ensure that both replica sets have 1 created replica Mar 11 11:14:41.150: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 11 11:14:41.155: INFO: Updating deployment test-rollover-deployment Mar 11 11:14:41.156: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 11 11:14:43.170: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 11 11:14:43.175: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 11 11:14:43.179: INFO: all replica sets need to contain the pod-template-hash label Mar 11 11:14:43.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522082, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 11:14:45.186: INFO: all replica sets need to contain the pod-template-hash label Mar 11 11:14:45.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522082, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 11:14:47.186: INFO: all replica sets need to contain the pod-template-hash label Mar 11 11:14:47.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522082, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 11:14:49.186: INFO: all replica sets need to contain the pod-template-hash label Mar 11 11:14:49.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522082, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 11:14:51.186: INFO: all replica sets need to contain the pod-template-hash label Mar 11 11:14:51.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522082, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719522079, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 11:14:53.185: INFO: Mar 11 11:14:53.185: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 11 11:14:53.192: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-59kh2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-59kh2/deployments/test-rollover-deployment,UID:7fa01745-6389-11ea-9978-0242ac11000d,ResourceVersion:504068,Generation:2,CreationTimestamp:2020-03-11 11:14:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-11 11:14:39 +0000 UTC 2020-03-11 11:14:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-11 11:14:52 +0000 UTC 2020-03-11 11:14:39 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 11 11:14:53.194: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-59kh2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-59kh2/replicasets/test-rollover-deployment-5b8479fdb6,UID:80d8ae3f-6389-11ea-9978-0242ac11000d,ResourceVersion:504059,Generation:2,CreationTimestamp:2020-03-11 11:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7fa01745-6389-11ea-9978-0242ac11000d 0xc00107aad7 0xc00107aad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 11 11:14:53.194: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 11 11:14:53.195: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-59kh2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-59kh2/replicasets/test-rollover-controller,UID:7b711787-6389-11ea-9978-0242ac11000d,ResourceVersion:504067,Generation:2,CreationTimestamp:2020-03-11 11:14:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7fa01745-6389-11ea-9978-0242ac11000d 0xc001923f97 0xc001923f98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 11:14:53.195: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-59kh2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-59kh2/replicasets/test-rollover-deployment-58494b7559,UID:7fa521d7-6389-11ea-9978-0242ac11000d,ResourceVersion:504027,Generation:2,CreationTimestamp:2020-03-11 11:14:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7fa01745-6389-11ea-9978-0242ac11000d 0xc00107a057 0xc00107a058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 11:14:53.197: INFO: Pod "test-rollover-deployment-5b8479fdb6-xc7j6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-xc7j6,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-59kh2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-59kh2/pods/test-rollover-deployment-5b8479fdb6-xc7j6,UID:80e655c2-6389-11ea-9978-0242ac11000d,ResourceVersion:504037,Generation:0,CreationTimestamp:2020-03-11 11:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 80d8ae3f-6389-11ea-9978-0242ac11000d 0xc001ebfd47 
0xc001ebfd48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bk9df {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bk9df,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-bk9df true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ebfe40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ebfe60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:14:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:14:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:14:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:14:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.128,StartTime:2020-03-11 11:14:41 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-11 11:14:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://999cc84226469fdf9c0d35733c96fcffc67ed7169a2f8e9712b38a1eb89f8bfc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:14:53.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-59kh2" for this suite. 
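For readers following the rollover mechanics above: the Deployment keeps scaling its new ReplicaSet up and the old ones down, honouring MinReadySeconds=10 before a replica counts as available. Below is a minimal sketch of a comparable Deployment object built with the k8s.io/api types; this is not the test's own source, the redis image and labels are taken from the dumps above, and everything else is illustrative.

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        // A Deployment comparable to "test-rollover-deployment": one replica,
        // MinReadySeconds=10, adopting pods labelled name=rollover-pod.
        // Changing Template.Spec.Containers[0].Image later triggers the
        // rollover to a fresh ReplicaSet, as seen in the dumps above.
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas:        int32Ptr(1),
                MinReadySeconds: 10,
                Selector: &metav1.LabelSelector{
                    MatchLabels: map[string]string{"name": "rollover-pod"},
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "rollover-pod"}},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "redis",
                            Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                        }},
                    },
                },
            },
        }
        b, _ := json.MarshalIndent(d, "", "  ")
        fmt.Println(string(b))
    }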
Mar 11 11:14:59.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:14:59.226: INFO: namespace: e2e-tests-deployment-59kh2, resource: bindings, ignored listing per whitelist Mar 11 11:14:59.287: INFO: namespace e2e-tests-deployment-59kh2 deletion completed in 6.086223062s • [SLOW TEST:27.259 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:14:59.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 11 11:15:03.418: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:03.425: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:05.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:05.429: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:07.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:07.430: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:09.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:09.430: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:11.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:11.429: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:13.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:13.430: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:15.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:15.429: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:17.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:17.430: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:19.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:19.429: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:21.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:21.429: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:23.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:23.429: INFO: Pod pod-with-poststart-exec-hook 
still exists Mar 11 11:15:25.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:25.428: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:27.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:27.429: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 11:15:29.425: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 11:15:29.429: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:15:29.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-hmjq5" for this suite. Mar 11 11:15:51.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:15:51.456: INFO: namespace: e2e-tests-container-lifecycle-hook-hmjq5, resource: bindings, ignored listing per whitelist Mar 11 11:15:51.511: INFO: namespace e2e-tests-container-lifecycle-hook-hmjq5 deletion completed in 22.078120482s • [SLOW TEST:52.224 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:15:51.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:15:51.584: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aad240d5-6389-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-vkzpd" to be "success or failure" Mar 11 11:15:51.588: INFO: Pod "downwardapi-volume-aad240d5-6389-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.882134ms Mar 11 11:15:53.592: INFO: Pod "downwardapi-volume-aad240d5-6389-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007696762s Mar 11 11:15:55.596: INFO: Pod "downwardapi-volume-aad240d5-6389-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011809822s STEP: Saw pod success Mar 11 11:15:55.596: INFO: Pod "downwardapi-volume-aad240d5-6389-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:15:55.598: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-aad240d5-6389-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:15:55.635: INFO: Waiting for pod downwardapi-volume-aad240d5-6389-11ea-bacb-0242ac11000a to disappear Mar 11 11:15:55.642: INFO: Pod downwardapi-volume-aad240d5-6389-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:15:55.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vkzpd" for this suite. Mar 11 11:16:01.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:16:01.724: INFO: namespace: e2e-tests-downward-api-vkzpd, resource: bindings, ignored listing per whitelist Mar 11 11:16:01.731: INFO: namespace e2e-tests-downward-api-vkzpd deletion completed in 6.085951539s • [SLOW TEST:10.219 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:16:01.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-g2gtk STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-g2gtk to expose endpoints map[] Mar 11 11:16:01.847: INFO: Get endpoints failed (2.248199ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 11 11:16:02.850: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-g2gtk exposes endpoints map[] (1.005803259s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-g2gtk STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-g2gtk to expose endpoints map[pod1:[100]] Mar 11 11:16:04.923: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-g2gtk exposes endpoints map[pod1:[100]] (2.067530012s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-g2gtk STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-g2gtk to expose endpoints map[pod2:[101] pod1:[100]] Mar 11 11:16:06.982: INFO: successfully 
validated that service multi-endpoint-test in namespace e2e-tests-services-g2gtk exposes endpoints map[pod1:[100] pod2:[101]] (2.055025226s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-g2gtk STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-g2gtk to expose endpoints map[pod2:[101]] Mar 11 11:16:07.004: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-g2gtk exposes endpoints map[pod2:[101]] (16.88491ms elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-g2gtk STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-g2gtk to expose endpoints map[] Mar 11 11:16:08.044: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-g2gtk exposes endpoints map[] (1.035768928s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:16:08.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-g2gtk" for this suite. Mar 11 11:16:14.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:16:14.115: INFO: namespace: e2e-tests-services-g2gtk, resource: bindings, ignored listing per whitelist Mar 11 11:16:14.171: INFO: namespace e2e-tests-services-g2gtk deletion completed in 6.084896374s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:12.440 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:16:14.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:16:14.272: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8577ebf-6389-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-pxbqj" to be "success or failure" Mar 11 11:16:14.276: INFO: Pod "downwardapi-volume-b8577ebf-6389-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280882ms Mar 11 11:16:16.279: INFO: Pod "downwardapi-volume-b8577ebf-6389-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006896942s STEP: Saw pod success Mar 11 11:16:16.279: INFO: Pod "downwardapi-volume-b8577ebf-6389-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:16:16.280: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b8577ebf-6389-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:16:16.332: INFO: Waiting for pod downwardapi-volume-b8577ebf-6389-11ea-bacb-0242ac11000a to disappear Mar 11 11:16:16.333: INFO: Pod downwardapi-volume-b8577ebf-6389-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:16:16.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pxbqj" for this suite. Mar 11 11:16:22.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:16:22.393: INFO: namespace: e2e-tests-projected-pxbqj, resource: bindings, ignored listing per whitelist Mar 11 11:16:22.423: INFO: namespace e2e-tests-projected-pxbqj deletion completed in 6.087370612s • [SLOW TEST:8.252 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:16:22.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 11 11:16:22.527: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fqkgd,SelfLink:/api/v1/namespaces/e2e-tests-watch-fqkgd/configmaps/e2e-watch-test-watch-closed,UID:bd417c0d-6389-11ea-9978-0242ac11000d,ResourceVersion:504424,Generation:0,CreationTimestamp:2020-03-11 11:16:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 11 11:16:22.527: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fqkgd,SelfLink:/api/v1/namespaces/e2e-tests-watch-fqkgd/configmaps/e2e-watch-test-watch-closed,UID:bd417c0d-6389-11ea-9978-0242ac11000d,ResourceVersion:504425,Generation:0,CreationTimestamp:2020-03-11 11:16:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 11 11:16:22.550: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fqkgd,SelfLink:/api/v1/namespaces/e2e-tests-watch-fqkgd/configmaps/e2e-watch-test-watch-closed,UID:bd417c0d-6389-11ea-9978-0242ac11000d,ResourceVersion:504426,Generation:0,CreationTimestamp:2020-03-11 11:16:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 11 11:16:22.551: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fqkgd,SelfLink:/api/v1/namespaces/e2e-tests-watch-fqkgd/configmaps/e2e-watch-test-watch-closed,UID:bd417c0d-6389-11ea-9978-0242ac11000d,ResourceVersion:504427,Generation:0,CreationTimestamp:2020-03-11 11:16:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:16:22.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-fqkgd" for this suite. 
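The watch test above relies on list/watch resume semantics: a new watch started from the last ResourceVersion the previous watch delivered (504425 here) receives exactly the changes that happened while the first watch was closed. Here is a sketch of that pattern with a current client-go; the v1.13-era client used in this run takes the same arguments minus the context, and the namespace and kubeconfig path are assumptions, not values from the log.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Resume watching ConfigMaps from the last ResourceVersion observed by
        // the previous watch, so the MODIFIED and DELETED events that happened
        // in between are replayed rather than lost.
        w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
            LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
            ResourceVersion: "504425",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
    }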
Mar 11 11:16:28.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:16:28.602: INFO: namespace: e2e-tests-watch-fqkgd, resource: bindings, ignored listing per whitelist Mar 11 11:16:28.667: INFO: namespace e2e-tests-watch-fqkgd deletion completed in 6.092885181s • [SLOW TEST:6.244 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:16:28.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Mar 11 11:16:28.778: INFO: Waiting up to 5m0s for pod "client-containers-c0fce54b-6389-11ea-bacb-0242ac11000a" in namespace "e2e-tests-containers-nlcjm" to be "success or failure" Mar 11 11:16:28.782: INFO: Pod "client-containers-c0fce54b-6389-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.706158ms Mar 11 11:16:30.785: INFO: Pod "client-containers-c0fce54b-6389-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007663921s STEP: Saw pod success Mar 11 11:16:30.785: INFO: Pod "client-containers-c0fce54b-6389-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:16:30.788: INFO: Trying to get logs from node hunter-worker pod client-containers-c0fce54b-6389-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 11:16:30.805: INFO: Waiting for pod client-containers-c0fce54b-6389-11ea-bacb-0242ac11000a to disappear Mar 11 11:16:30.831: INFO: Pod client-containers-c0fce54b-6389-11ea-bacb-0242ac11000a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:16:30.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-nlcjm" for this suite. 
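The Docker Containers test above checks that a container whose command and args are left blank runs whatever ENTRYPOINT/CMD the image itself ships. A minimal pod object illustrating that; the pod name and image here are illustrative, not the ones the test used.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Command and Args are deliberately left unset, so the container falls
        // back to the image's own ENTRYPOINT and CMD.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "client-containers-defaults"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "docker.io/library/busybox:1.29", // illustrative image
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }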
Mar 11 11:16:36.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:16:36.909: INFO: namespace: e2e-tests-containers-nlcjm, resource: bindings, ignored listing per whitelist Mar 11 11:16:36.918: INFO: namespace e2e-tests-containers-nlcjm deletion completed in 6.082821598s • [SLOW TEST:8.251 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:16:36.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-c5e4f026-6389-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:16:37.065: INFO: Waiting up to 5m0s for pod "pod-configmaps-c5eddb53-6389-11ea-bacb-0242ac11000a" in namespace "e2e-tests-configmap-brjdr" to be "success or failure" Mar 11 11:16:37.073: INFO: Pod "pod-configmaps-c5eddb53-6389-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.797301ms Mar 11 11:16:39.076: INFO: Pod "pod-configmaps-c5eddb53-6389-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01130308s STEP: Saw pod success Mar 11 11:16:39.076: INFO: Pod "pod-configmaps-c5eddb53-6389-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:16:39.079: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-c5eddb53-6389-11ea-bacb-0242ac11000a container configmap-volume-test: STEP: delete the pod Mar 11 11:16:39.105: INFO: Waiting for pod pod-configmaps-c5eddb53-6389-11ea-bacb-0242ac11000a to disappear Mar 11 11:16:39.124: INFO: Pod pod-configmaps-c5eddb53-6389-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:16:39.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-brjdr" for this suite. 
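As a reference for the volume wiring exercised above: the pod mounts the ConfigMap as a volume and runs the consuming container under a non-root UID. The following is a sketch using the k8s.io/api types; only the ConfigMap name comes from the log, while the UID 1000, mount path, key name, image and command are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-nonroot"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                // Run the pod as a non-root UID; the ConfigMap files are created
                // with the default mode 0644, so that user can still read them.
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "configmap-test-volume-c5e4f026-6389-11ea-bacb-0242ac11000a",
                            },
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "docker.io/library/busybox:1.29", // illustrative
                    Command: []string{"cat", "/etc/configmap-volume/data-1"}, // key name assumed
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }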
Mar 11 11:16:45.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:16:45.167: INFO: namespace: e2e-tests-configmap-brjdr, resource: bindings, ignored listing per whitelist Mar 11 11:16:45.217: INFO: namespace e2e-tests-configmap-brjdr deletion completed in 6.089883855s • [SLOW TEST:8.299 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:16:45.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-cad975f5-6389-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:16:45.329: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cadaa126-6389-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-zjc8r" to be "success or failure" Mar 11 11:16:45.333: INFO: Pod "pod-projected-configmaps-cadaa126-6389-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.807512ms Mar 11 11:16:47.336: INFO: Pod "pod-projected-configmaps-cadaa126-6389-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006772433s STEP: Saw pod success Mar 11 11:16:47.336: INFO: Pod "pod-projected-configmaps-cadaa126-6389-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:16:47.338: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-cadaa126-6389-11ea-bacb-0242ac11000a container projected-configmap-volume-test: STEP: delete the pod Mar 11 11:16:47.352: INFO: Waiting for pod pod-projected-configmaps-cadaa126-6389-11ea-bacb-0242ac11000a to disappear Mar 11 11:16:47.357: INFO: Pod pod-projected-configmaps-cadaa126-6389-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:16:47.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zjc8r" for this suite. 
Mar 11 11:16:53.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:16:53.406: INFO: namespace: e2e-tests-projected-zjc8r, resource: bindings, ignored listing per whitelist Mar 11 11:16:53.445: INFO: namespace e2e-tests-projected-zjc8r deletion completed in 6.086533788s • [SLOW TEST:8.228 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:16:53.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-cfbda2a8-6389-11ea-bacb-0242ac11000a STEP: Creating configMap with name cm-test-opt-upd-cfbda304-6389-11ea-bacb-0242ac11000a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cfbda2a8-6389-11ea-bacb-0242ac11000a STEP: Updating configmap cm-test-opt-upd-cfbda304-6389-11ea-bacb-0242ac11000a STEP: Creating configMap with name cm-test-opt-create-cfbda322-6389-11ea-bacb-0242ac11000a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:18:03.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7rbtr" for this suite. 
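The key detail in the optional-updates test above is the Optional flag on each projected ConfigMap source: a missing optional ConfigMap simply yields no files instead of blocking pod startup, and the kubelet refreshes the volume contents as the ConfigMaps are deleted, updated and created. A sketch of such a projected volume follows; it reuses the cm-test-opt-* names from the log but is not the test's own layout, which may split the sources across separate volumes.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    func main() {
        // One projected volume drawing from three optional ConfigMaps; any of
        // them may be absent without failing pod startup.
        vol := corev1.Volume{
            Name: "projected-configmap-volumes",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del-cfbda2a8-6389-11ea-bacb-0242ac11000a"},
                            Optional:             boolPtr(true),
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd-cfbda304-6389-11ea-bacb-0242ac11000a"},
                            Optional:             boolPtr(true),
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create-cfbda322-6389-11ea-bacb-0242ac11000a"},
                            Optional:             boolPtr(true),
                        }},
                    },
                },
            },
        }
        b, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(b))
    }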
Mar 11 11:18:25.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:18:25.925: INFO: namespace: e2e-tests-projected-7rbtr, resource: bindings, ignored listing per whitelist Mar 11 11:18:25.983: INFO: namespace e2e-tests-projected-7rbtr deletion completed in 22.092512362s • [SLOW TEST:92.538 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:18:25.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-06e94be3-638a-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:18:26.095: INFO: Waiting up to 5m0s for pod "pod-configmaps-06ea3f2d-638a-11ea-bacb-0242ac11000a" in namespace "e2e-tests-configmap-r7mqq" to be "success or failure" Mar 11 11:18:26.099: INFO: Pod "pod-configmaps-06ea3f2d-638a-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.661599ms Mar 11 11:18:28.103: INFO: Pod "pod-configmaps-06ea3f2d-638a-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00791138s STEP: Saw pod success Mar 11 11:18:28.103: INFO: Pod "pod-configmaps-06ea3f2d-638a-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:18:28.105: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-06ea3f2d-638a-11ea-bacb-0242ac11000a container configmap-volume-test: STEP: delete the pod Mar 11 11:18:28.124: INFO: Waiting for pod pod-configmaps-06ea3f2d-638a-11ea-bacb-0242ac11000a to disappear Mar 11 11:18:28.162: INFO: Pod pod-configmaps-06ea3f2d-638a-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:18:28.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-r7mqq" for this suite. 
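The "with mappings" variant above differs from a plain ConfigMap volume only in the Items list, which projects selected keys under chosen file paths inside the mount. A sketch of that volume source; the ConfigMap name is the one from the log, while the key and path are illustrative.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Only the key "data-2" is projected, and it appears in the volume as
        // the file path/to/data-2 rather than under its key name.
        src := corev1.ConfigMapVolumeSource{
            LocalObjectReference: corev1.LocalObjectReference{
                Name: "configmap-test-volume-map-06e94be3-638a-11ea-bacb-0242ac11000a",
            },
            Items: []corev1.KeyToPath{
                {Key: "data-2", Path: "path/to/data-2"},
            },
        }
        b, _ := json.MarshalIndent(src, "", "  ")
        fmt.Println(string(b))
    }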
Mar 11 11:18:34.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:18:34.198: INFO: namespace: e2e-tests-configmap-r7mqq, resource: bindings, ignored listing per whitelist Mar 11 11:18:34.228: INFO: namespace e2e-tests-configmap-r7mqq deletion completed in 6.062761533s • [SLOW TEST:8.244 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:18:34.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 11:18:34.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-64mjj' Mar 11 11:18:36.197: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 11:18:36.197: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Mar 11 11:18:36.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-64mjj' Mar 11 11:18:36.311: INFO: stderr: "" Mar 11 11:18:36.311: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:18:36.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-64mjj" for this suite. 
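The deprecation warning above appears because kubectl run --generator=job/v1 creates a batch/v1 Job behind the scenes, and the same object can be declared directly. A rough Go equivalent of job.batch/e2e-test-nginx-job follows; fields beyond the name, image and restart policy shown in the log are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // RestartPolicy OnFailure is what distinguishes this from a plain pod:
        // failed containers are restarted in place until the Job succeeds.
        job := &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
            Spec: batchv1.JobSpec{
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        RestartPolicy: corev1.RestartPolicyOnFailure,
                        Containers: []corev1.Container{{
                            Name:  "e2e-test-nginx-job",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        b, _ := json.MarshalIndent(job, "", "  ")
        fmt.Println(string(b))
    }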
Mar 11 11:18:58.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:18:58.357: INFO: namespace: e2e-tests-kubectl-64mjj, resource: bindings, ignored listing per whitelist Mar 11 11:18:58.402: INFO: namespace e2e-tests-kubectl-64mjj deletion completed in 22.08815321s • [SLOW TEST:24.174 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:18:58.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:18:58.479: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 11 11:18:58.487: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:18:58.489: INFO: Number of nodes with available pods: 0 Mar 11 11:18:58.489: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:18:59.496: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:18:59.501: INFO: Number of nodes with available pods: 0 Mar 11 11:18:59.501: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:19:00.492: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:00.494: INFO: Number of nodes with available pods: 0 Mar 11 11:19:00.494: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:19:01.493: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:01.497: INFO: Number of nodes with available pods: 2 Mar 11 11:19:01.497: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 11 11:19:01.533: INFO: Wrong image for pod: daemon-set-zds88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 11 11:19:01.533: INFO: Wrong image for pod: daemon-set-zfcw8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:01.538: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:02.542: INFO: Wrong image for pod: daemon-set-zds88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:02.542: INFO: Wrong image for pod: daemon-set-zfcw8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:02.545: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:03.551: INFO: Wrong image for pod: daemon-set-zds88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:03.552: INFO: Wrong image for pod: daemon-set-zfcw8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:03.554: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:04.542: INFO: Wrong image for pod: daemon-set-zds88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:04.542: INFO: Pod daemon-set-zds88 is not available Mar 11 11:19:04.542: INFO: Wrong image for pod: daemon-set-zfcw8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:04.545: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:05.541: INFO: Wrong image for pod: daemon-set-zds88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:05.541: INFO: Pod daemon-set-zds88 is not available Mar 11 11:19:05.541: INFO: Wrong image for pod: daemon-set-zfcw8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:05.544: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:06.542: INFO: Wrong image for pod: daemon-set-zds88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:06.542: INFO: Pod daemon-set-zds88 is not available Mar 11 11:19:06.542: INFO: Wrong image for pod: daemon-set-zfcw8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:06.545: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:07.542: INFO: Wrong image for pod: daemon-set-zds88. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:07.542: INFO: Pod daemon-set-zds88 is not available Mar 11 11:19:07.542: INFO: Wrong image for pod: daemon-set-zfcw8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 11 11:19:07.546: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:08.542: INFO: Pod daemon-set-fgd2n is not available Mar 11 11:19:08.542: INFO: Wrong image for pod: daemon-set-zfcw8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:08.545: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:09.544: INFO: Pod daemon-set-fgd2n is not available Mar 11 11:19:09.544: INFO: Wrong image for pod: daemon-set-zfcw8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:09.547: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:10.542: INFO: Wrong image for pod: daemon-set-zfcw8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:10.545: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:11.541: INFO: Wrong image for pod: daemon-set-zfcw8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 11:19:11.541: INFO: Pod daemon-set-zfcw8 is not available Mar 11 11:19:11.576: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:12.543: INFO: Pod daemon-set-bv89m is not available Mar 11 11:19:12.547: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 11 11:19:12.551: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:12.553: INFO: Number of nodes with available pods: 1 Mar 11 11:19:12.553: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:19:13.559: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:13.562: INFO: Number of nodes with available pods: 1 Mar 11 11:19:13.562: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:19:14.557: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:19:14.560: INFO: Number of nodes with available pods: 2 Mar 11 11:19:14.560: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bz95j, will wait for the garbage collector to delete the pods Mar 11 11:19:14.628: INFO: Deleting DaemonSet.extensions daemon-set took: 4.860234ms Mar 11 11:19:14.729: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.241883ms Mar 11 11:19:28.144: INFO: Number of nodes with available pods: 0 Mar 11 11:19:28.144: INFO: Number of running nodes: 0, number of available pods: 0 Mar 11 11:19:28.147: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bz95j/daemonsets","resourceVersion":"505039"},"items":null} Mar 11 11:19:28.149: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bz95j/pods","resourceVersion":"505039"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:19:28.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-bz95j" for this suite. 
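What the DaemonSet test above exercises is the RollingUpdate strategy: after the image in the pod template changes, each node's pod is deleted and replaced in turn, which is why a node briefly reports "Pod ... is not available" before the available count returns to 2. Below is a sketch of a DaemonSet declaring that strategy; the images are the two from the log, while the selector and labels are assumptions.

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        ds := &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"daemonset-name": "daemon-set"}},
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    // RollingUpdate, the apps/v1 default, replaces pods node by
                    // node when the template changes, e.g. nginx -> redis above.
                    Type: appsv1.RollingUpdateDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"daemonset-name": "daemon-set"}},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        b, _ := json.MarshalIndent(ds, "", "  ")
        fmt.Println(string(b))
    }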
Mar 11 11:19:34.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:19:34.203: INFO: namespace: e2e-tests-daemonsets-bz95j, resource: bindings, ignored listing per whitelist Mar 11 11:19:34.232: INFO: namespace e2e-tests-daemonsets-bz95j deletion completed in 6.072912267s • [SLOW TEST:35.830 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:19:34.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 11:19:34.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-msrwk' Mar 11 11:19:34.428: INFO: stderr: "" Mar 11 11:19:34.428: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 11 11:19:39.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-msrwk -o json' Mar 11 11:19:39.588: INFO: stderr: "" Mar 11 11:19:39.588: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-11T11:19:34Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-msrwk\",\n \"resourceVersion\": \"505099\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-msrwk/pods/e2e-test-nginx-pod\",\n \"uid\": \"2fa4dc03-638a-11ea-9978-0242ac11000d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-lcccn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n 
\"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-lcccn\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-lcccn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-11T11:19:34Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-11T11:19:36Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-11T11:19:36Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-11T11:19:34Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://556ed5d6fa6f3c43462ccd70ded65d27ee659671dd8b98e71d4fa8ade125e84d\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-11T11:19:35Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.11\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.56\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-11T11:19:34Z\"\n }\n}\n" STEP: replace the image in the pod Mar 11 11:19:39.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-msrwk' Mar 11 11:19:39.874: INFO: stderr: "" Mar 11 11:19:39.874: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Mar 11 11:19:39.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-msrwk' Mar 11 11:19:42.067: INFO: stderr: "" Mar 11 11:19:42.067: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:19:42.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-msrwk" for this suite. 
Mar 11 11:19:48.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:19:48.133: INFO: namespace: e2e-tests-kubectl-msrwk, resource: bindings, ignored listing per whitelist Mar 11 11:19:48.179: INFO: namespace e2e-tests-kubectl-msrwk deletion completed in 6.081734902s • [SLOW TEST:13.947 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:19:48.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-37e5b040-638a-11ea-bacb-0242ac11000a STEP: Creating secret with name secret-projected-all-test-volume-37e5b01c-638a-11ea-bacb-0242ac11000a STEP: Creating a pod to test Check all projections for projected volume plugin Mar 11 11:19:48.279: INFO: Waiting up to 5m0s for pod "projected-volume-37e5afc2-638a-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-6ztsf" to be "success or failure" Mar 11 11:19:48.298: INFO: Pod "projected-volume-37e5afc2-638a-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.660548ms Mar 11 11:19:50.307: INFO: Pod "projected-volume-37e5afc2-638a-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027261745s STEP: Saw pod success Mar 11 11:19:50.307: INFO: Pod "projected-volume-37e5afc2-638a-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:19:50.309: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-37e5afc2-638a-11ea-bacb-0242ac11000a container projected-all-volume-test: STEP: delete the pod Mar 11 11:19:50.327: INFO: Waiting for pod projected-volume-37e5afc2-638a-11ea-bacb-0242ac11000a to disappear Mar 11 11:19:50.332: INFO: Pod projected-volume-37e5afc2-638a-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:19:50.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6ztsf" for this suite. 
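[Editor's note] The "projected combined" case above mounts a configMap, a secret and downward-API items through a single projected volume. A minimal manifest sketch of the same idea; all names, keys and the image are illustrative, not taken from the run above:

  kubectl create configmap demo-config --from-literal=configmap-data=from-configmap
  kubectl create secret generic demo-secret --from-literal=secret-data=from-secret
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-demo
  spec:
    restartPolicy: Never
    containers:
    - name: reader
      image: busybox:1.29
      command: ["sh", "-c", "ls -lR /projected && cat /projected/*"]
      volumeMounts:
      - name: all-in-one
        mountPath: /projected
    volumes:
    - name: all-in-one
      projected:
        sources:
        - configMap:
            name: demo-config
        - secret:
            name: demo-secret
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
  EOF

All three sources appear as files under one mount point, which is exactly what the projection API is checking here.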
Mar 11 11:19:56.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:19:56.398: INFO: namespace: e2e-tests-projected-6ztsf, resource: bindings, ignored listing per whitelist Mar 11 11:19:56.403: INFO: namespace e2e-tests-projected-6ztsf deletion completed in 6.067271898s • [SLOW TEST:8.223 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:19:56.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0311 11:20:36.503202 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 11 11:20:36.503: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:20:36.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mrrst" for this suite. 
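[Editor's note] "Delete options say so" above refers to an orphaning delete: the ReplicationController is removed, its pods are not. A hedged CLI equivalent; names and image are illustrative, and the --cascade=orphan spelling is for current kubectl (older releases used --cascade=false):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: orphan-demo
  spec:
    replicas: 2
    selector:
      app: orphan-demo
    template:
      metadata:
        labels:
          app: orphan-demo
      spec:
        containers:
        - name: app
          image: nginx:1.14-alpine
  EOF

  # Delete the controller but orphan its pods (older kubectl: --cascade=false).
  kubectl delete rc orphan-demo --cascade=orphan

  # The pods survive; the garbage collector strips their ownerReferences instead of deleting them.
  kubectl get pods -l app=orphan-demo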
Mar 11 11:20:44.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:20:44.557: INFO: namespace: e2e-tests-gc-mrrst, resource: bindings, ignored listing per whitelist Mar 11 11:20:44.582: INFO: namespace e2e-tests-gc-mrrst deletion completed in 8.077148748s • [SLOW TEST:48.179 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:20:44.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Mar 11 11:20:44.660: INFO: Waiting up to 5m0s for pod "var-expansion-5981c99c-638a-11ea-bacb-0242ac11000a" in namespace "e2e-tests-var-expansion-lh5ts" to be "success or failure" Mar 11 11:20:44.664: INFO: Pod "var-expansion-5981c99c-638a-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43574ms Mar 11 11:20:46.668: INFO: Pod "var-expansion-5981c99c-638a-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008720263s STEP: Saw pod success Mar 11 11:20:46.668: INFO: Pod "var-expansion-5981c99c-638a-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:20:46.671: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-5981c99c-638a-11ea-bacb-0242ac11000a container dapi-container: STEP: delete the pod Mar 11 11:20:46.706: INFO: Waiting for pod var-expansion-5981c99c-638a-11ea-bacb-0242ac11000a to disappear Mar 11 11:20:46.732: INFO: Pod var-expansion-5981c99c-638a-11ea-bacb-0242ac11000a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:20:46.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-lh5ts" for this suite. 
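[Editor's note] The variable-expansion case builds a pod whose command interpolates an environment variable with the $(VAR) syntax, which the kubelet substitutes before the container starts. A minimal sketch; names and values are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.29
      env:
      - name: MESSAGE
        value: "hello from the pod spec"
      # $(MESSAGE) is expanded by Kubernetes, not by the shell.
      command: ["sh", "-c", "echo test-value: $(MESSAGE)"]
  EOF
  kubectl logs var-expansion-demo   # prints the expanded value once the pod has finished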
Mar 11 11:20:52.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:20:52.777: INFO: namespace: e2e-tests-var-expansion-lh5ts, resource: bindings, ignored listing per whitelist Mar 11 11:20:52.870: INFO: namespace e2e-tests-var-expansion-lh5ts deletion completed in 6.13453982s • [SLOW TEST:8.288 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:20:52.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 11 11:20:52.977: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:20:52.992: INFO: Number of nodes with available pods: 0 Mar 11 11:20:52.992: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:20:53.996: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:20:53.998: INFO: Number of nodes with available pods: 0 Mar 11 11:20:53.998: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:20:54.997: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:20:55.000: INFO: Number of nodes with available pods: 0 Mar 11 11:20:55.000: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:20:55.996: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:20:55.999: INFO: Number of nodes with available pods: 2 Mar 11 11:20:55.999: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 11 11:20:56.033: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:20:56.058: INFO: Number of nodes with available pods: 1 Mar 11 11:20:56.058: INFO: Node hunter-worker2 is running more than one daemon pod Mar 11 11:20:57.062: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:20:57.066: INFO: Number of nodes with available pods: 1 Mar 11 11:20:57.066: INFO: Node hunter-worker2 is running more than one daemon pod Mar 11 11:20:58.061: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:20:58.063: INFO: Number of nodes with available pods: 1 Mar 11 11:20:58.063: INFO: Node hunter-worker2 is running more than one daemon pod Mar 11 11:20:59.063: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 11:20:59.067: INFO: Number of nodes with available pods: 2 Mar 11 11:20:59.067: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-sbgrz, will wait for the garbage collector to delete the pods Mar 11 11:20:59.133: INFO: Deleting DaemonSet.extensions daemon-set took: 7.591447ms Mar 11 11:20:59.233: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.174894ms Mar 11 11:21:07.960: INFO: Number of nodes with available pods: 0 Mar 11 11:21:07.960: INFO: Number of running nodes: 0, number of available pods: 0 Mar 11 11:21:07.962: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-sbgrz/daemonsets","resourceVersion":"505604"},"items":null} Mar 11 11:21:07.964: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-sbgrz/pods","resourceVersion":"505604"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:21:07.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-sbgrz" for this suite. 
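[Editor's note] A minimal DaemonSet in the shape of the one exercised above; the controller schedules one pod per eligible node and, as the test checks, recreates any daemon pod that enters the Failed phase. Name and image are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set-demo
  spec:
    selector:
      matchLabels:
        app: daemon-set-demo
    template:
      metadata:
        labels:
          app: daemon-set-demo
      spec:
        containers:
        - name: app
          image: nginx:1.14-alpine
  EOF
  # One pod per schedulable worker; a failed daemon pod is deleted and replaced by the controller.
  kubectl get pods -l app=daemon-set-demo -o wide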
Mar 11 11:21:13.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:21:14.028: INFO: namespace: e2e-tests-daemonsets-sbgrz, resource: bindings, ignored listing per whitelist Mar 11 11:21:14.035: INFO: namespace e2e-tests-daemonsets-sbgrz deletion completed in 6.060557046s • [SLOW TEST:21.165 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:21:14.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:21:32.196: INFO: Container started at 2020-03-11 11:21:15 +0000 UTC, pod became ready at 2020-03-11 11:21:31 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:21:32.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-z7cms" for this suite. 
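[Editor's note] The readiness-probe case keys off initialDelaySeconds: the pod must not report Ready before the delay has passed and must never restart. A hedged sketch of the same shape (the conformance test uses its own probe container; the httpGet probe and image here are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-demo
  spec:
    containers:
    - name: app
      image: nginx:1.14-alpine
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 15   # Ready stays False at least this long
        periodSeconds: 5
  EOF
  # Watch READY flip from 0/1 to 1/1 after roughly 15s; RESTARTS stays 0.
  kubectl get pod readiness-demo -w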
Mar 11 11:21:54.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:21:54.264: INFO: namespace: e2e-tests-container-probe-z7cms, resource: bindings, ignored listing per whitelist Mar 11 11:21:54.314: INFO: namespace e2e-tests-container-probe-z7cms deletion completed in 22.113358047s • [SLOW TEST:40.278 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:21:54.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 11 11:21:56.948: INFO: Successfully updated pod "annotationupdate831201a8-638a-11ea-bacb-0242ac11000a" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:21:58.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-c78jz" for this suite. 
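[Editor's note] The annotation-update case relies on a downwardAPI volume, whose files the kubelet rewrites when pod metadata changes, without restarting the container. A minimal sketch; names and values are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotation-demo
    annotations:
      build: "one"
  spec:
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations
  EOF
  # Change the annotation; the mounted file is eventually refreshed in place.
  kubectl annotate pod annotation-demo build=two --overwrite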
Mar 11 11:22:20.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:22:21.006: INFO: namespace: e2e-tests-downward-api-c78jz, resource: bindings, ignored listing per whitelist Mar 11 11:22:21.056: INFO: namespace e2e-tests-downward-api-c78jz deletion completed in 22.087632029s • [SLOW TEST:26.742 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:22:21.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:22:21.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-930885da-638a-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-m7sj4" to be "success or failure" Mar 11 11:22:21.181: INFO: Pod "downwardapi-volume-930885da-638a-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.970362ms Mar 11 11:22:23.185: INFO: Pod "downwardapi-volume-930885da-638a-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013937411s Mar 11 11:22:25.190: INFO: Pod "downwardapi-volume-930885da-638a-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018206912s STEP: Saw pod success Mar 11 11:22:25.190: INFO: Pod "downwardapi-volume-930885da-638a-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:22:25.193: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-930885da-638a-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:22:25.904: INFO: Waiting for pod downwardapi-volume-930885da-638a-11ea-bacb-0242ac11000a to disappear Mar 11 11:22:25.912: INFO: Pod downwardapi-volume-930885da-638a-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:22:25.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-m7sj4" for this suite. 
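[Editor's note] The cpu-request case uses resourceFieldRef rather than fieldRef, exposing the container's own resource request through the downwardAPI volume. Sketch, with illustrative names and values:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cpu-request-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: 1m          # with this divisor the file contains "250"
  EOF
  kubectl logs cpu-request-demo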
Mar 11 11:22:31.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:22:32.029: INFO: namespace: e2e-tests-downward-api-m7sj4, resource: bindings, ignored listing per whitelist Mar 11 11:22:32.065: INFO: namespace e2e-tests-downward-api-m7sj4 deletion completed in 6.146111962s • [SLOW TEST:11.009 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:22:32.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-99905598-638a-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:22:32.147: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-999111d5-638a-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-qjxzz" to be "success or failure" Mar 11 11:22:32.177: INFO: Pod "pod-projected-configmaps-999111d5-638a-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.591673ms Mar 11 11:22:34.180: INFO: Pod "pod-projected-configmaps-999111d5-638a-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033447914s Mar 11 11:22:36.184: INFO: Pod "pod-projected-configmaps-999111d5-638a-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037324136s STEP: Saw pod success Mar 11 11:22:36.184: INFO: Pod "pod-projected-configmaps-999111d5-638a-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:22:36.187: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-999111d5-638a-11ea-bacb-0242ac11000a container projected-configmap-volume-test: STEP: delete the pod Mar 11 11:22:36.221: INFO: Waiting for pod pod-projected-configmaps-999111d5-638a-11ea-bacb-0242ac11000a to disappear Mar 11 11:22:36.234: INFO: Pod pod-projected-configmaps-999111d5-638a-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:22:36.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qjxzz" for this suite. 
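[Editor's note] defaultMode on the projected volume controls the permissions of the projected files, which is what the test inspects inside the container. A hedged sketch with illustrative names and a 0400 mode:

  kubectl create configmap mode-demo-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox:1.29
      command: ["sh", "-c", "ls -lL /etc/projected && cat /etc/projected/data-1"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        defaultMode: 0400        # projected files become owner-read-only
        sources:
        - configMap:
            name: mode-demo-config
  EOF
  kubectl logs projected-configmap-mode-demo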
Mar 11 11:22:42.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:22:42.258: INFO: namespace: e2e-tests-projected-qjxzz, resource: bindings, ignored listing per whitelist Mar 11 11:22:42.292: INFO: namespace e2e-tests-projected-qjxzz deletion completed in 6.054774908s • [SLOW TEST:10.227 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:22:42.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 11 11:22:50.425: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x8kfw PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:22:50.425: INFO: >>> kubeConfig: /root/.kube/config I0311 11:22:50.449353 6 log.go:172] (0xc0020142c0) (0xc00084d040) Create stream I0311 11:22:50.449377 6 log.go:172] (0xc0020142c0) (0xc00084d040) Stream added, broadcasting: 1 I0311 11:22:50.451147 6 log.go:172] (0xc0020142c0) Reply frame received for 1 I0311 11:22:50.451178 6 log.go:172] (0xc0020142c0) (0xc0011a3a40) Create stream I0311 11:22:50.451186 6 log.go:172] (0xc0020142c0) (0xc0011a3a40) Stream added, broadcasting: 3 I0311 11:22:50.451877 6 log.go:172] (0xc0020142c0) Reply frame received for 3 I0311 11:22:50.451900 6 log.go:172] (0xc0020142c0) (0xc00084d0e0) Create stream I0311 11:22:50.451908 6 log.go:172] (0xc0020142c0) (0xc00084d0e0) Stream added, broadcasting: 5 I0311 11:22:50.452547 6 log.go:172] (0xc0020142c0) Reply frame received for 5 I0311 11:22:50.505080 6 log.go:172] (0xc0020142c0) Data frame received for 5 I0311 11:22:50.505109 6 log.go:172] (0xc00084d0e0) (5) Data frame handling I0311 11:22:50.505127 6 log.go:172] (0xc0020142c0) Data frame received for 3 I0311 11:22:50.505139 6 log.go:172] (0xc0011a3a40) (3) Data frame handling I0311 11:22:50.505149 6 log.go:172] (0xc0011a3a40) (3) Data frame sent I0311 11:22:50.505155 6 log.go:172] (0xc0020142c0) Data frame received for 3 I0311 11:22:50.505159 6 log.go:172] (0xc0011a3a40) (3) Data frame handling I0311 11:22:50.506454 6 log.go:172] (0xc0020142c0) Data frame received for 1 I0311 11:22:50.506472 6 log.go:172] (0xc00084d040) (1) Data frame handling I0311 11:22:50.506484 6 log.go:172] (0xc00084d040) (1) Data frame sent I0311 
11:22:50.506496 6 log.go:172] (0xc0020142c0) (0xc00084d040) Stream removed, broadcasting: 1 I0311 11:22:50.506513 6 log.go:172] (0xc0020142c0) Go away received I0311 11:22:50.506560 6 log.go:172] (0xc0020142c0) (0xc00084d040) Stream removed, broadcasting: 1 I0311 11:22:50.506570 6 log.go:172] (0xc0020142c0) (0xc0011a3a40) Stream removed, broadcasting: 3 I0311 11:22:50.506581 6 log.go:172] (0xc0020142c0) (0xc00084d0e0) Stream removed, broadcasting: 5 Mar 11 11:22:50.506: INFO: Exec stderr: "" Mar 11 11:22:50.506: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x8kfw PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:22:50.506: INFO: >>> kubeConfig: /root/.kube/config I0311 11:22:50.528013 6 log.go:172] (0xc0026422c0) (0xc0012f2280) Create stream I0311 11:22:50.528033 6 log.go:172] (0xc0026422c0) (0xc0012f2280) Stream added, broadcasting: 1 I0311 11:22:50.530461 6 log.go:172] (0xc0026422c0) Reply frame received for 1 I0311 11:22:50.530494 6 log.go:172] (0xc0026422c0) (0xc001a00780) Create stream I0311 11:22:50.530503 6 log.go:172] (0xc0026422c0) (0xc001a00780) Stream added, broadcasting: 3 I0311 11:22:50.531312 6 log.go:172] (0xc0026422c0) Reply frame received for 3 I0311 11:22:50.531341 6 log.go:172] (0xc0026422c0) (0xc0012f2320) Create stream I0311 11:22:50.531350 6 log.go:172] (0xc0026422c0) (0xc0012f2320) Stream added, broadcasting: 5 I0311 11:22:50.532203 6 log.go:172] (0xc0026422c0) Reply frame received for 5 I0311 11:22:50.601106 6 log.go:172] (0xc0026422c0) Data frame received for 5 I0311 11:22:50.601142 6 log.go:172] (0xc0012f2320) (5) Data frame handling I0311 11:22:50.601168 6 log.go:172] (0xc0026422c0) Data frame received for 3 I0311 11:22:50.601185 6 log.go:172] (0xc001a00780) (3) Data frame handling I0311 11:22:50.601201 6 log.go:172] (0xc001a00780) (3) Data frame sent I0311 11:22:50.601209 6 log.go:172] (0xc0026422c0) Data frame received for 3 I0311 11:22:50.601220 6 log.go:172] (0xc001a00780) (3) Data frame handling I0311 11:22:50.602106 6 log.go:172] (0xc0026422c0) Data frame received for 1 I0311 11:22:50.602182 6 log.go:172] (0xc0012f2280) (1) Data frame handling I0311 11:22:50.602199 6 log.go:172] (0xc0012f2280) (1) Data frame sent I0311 11:22:50.602214 6 log.go:172] (0xc0026422c0) (0xc0012f2280) Stream removed, broadcasting: 1 I0311 11:22:50.602233 6 log.go:172] (0xc0026422c0) Go away received I0311 11:22:50.602385 6 log.go:172] (0xc0026422c0) (0xc0012f2280) Stream removed, broadcasting: 1 I0311 11:22:50.602409 6 log.go:172] (0xc0026422c0) (0xc001a00780) Stream removed, broadcasting: 3 I0311 11:22:50.602417 6 log.go:172] (0xc0026422c0) (0xc0012f2320) Stream removed, broadcasting: 5 Mar 11 11:22:50.602: INFO: Exec stderr: "" Mar 11 11:22:50.602: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x8kfw PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:22:50.602: INFO: >>> kubeConfig: /root/.kube/config I0311 11:22:50.627411 6 log.go:172] (0xc002014790) (0xc00084d360) Create stream I0311 11:22:50.627437 6 log.go:172] (0xc002014790) (0xc00084d360) Stream added, broadcasting: 1 I0311 11:22:50.629480 6 log.go:172] (0xc002014790) Reply frame received for 1 I0311 11:22:50.629514 6 log.go:172] (0xc002014790) (0xc00084d400) Create stream I0311 11:22:50.629527 6 log.go:172] (0xc002014790) (0xc00084d400) Stream added, broadcasting: 3 I0311 11:22:50.630479 
6 log.go:172] (0xc002014790) Reply frame received for 3 I0311 11:22:50.630519 6 log.go:172] (0xc002014790) (0xc00084d4a0) Create stream I0311 11:22:50.630531 6 log.go:172] (0xc002014790) (0xc00084d4a0) Stream added, broadcasting: 5 I0311 11:22:50.631303 6 log.go:172] (0xc002014790) Reply frame received for 5 I0311 11:22:50.697032 6 log.go:172] (0xc002014790) Data frame received for 3 I0311 11:22:50.697078 6 log.go:172] (0xc002014790) Data frame received for 5 I0311 11:22:50.697120 6 log.go:172] (0xc00084d4a0) (5) Data frame handling I0311 11:22:50.697147 6 log.go:172] (0xc00084d400) (3) Data frame handling I0311 11:22:50.697162 6 log.go:172] (0xc00084d400) (3) Data frame sent I0311 11:22:50.697174 6 log.go:172] (0xc002014790) Data frame received for 3 I0311 11:22:50.697184 6 log.go:172] (0xc00084d400) (3) Data frame handling I0311 11:22:50.698392 6 log.go:172] (0xc002014790) Data frame received for 1 I0311 11:22:50.698427 6 log.go:172] (0xc00084d360) (1) Data frame handling I0311 11:22:50.698488 6 log.go:172] (0xc00084d360) (1) Data frame sent I0311 11:22:50.698521 6 log.go:172] (0xc002014790) (0xc00084d360) Stream removed, broadcasting: 1 I0311 11:22:50.698551 6 log.go:172] (0xc002014790) Go away received I0311 11:22:50.698648 6 log.go:172] (0xc002014790) (0xc00084d360) Stream removed, broadcasting: 1 I0311 11:22:50.698674 6 log.go:172] (0xc002014790) (0xc00084d400) Stream removed, broadcasting: 3 I0311 11:22:50.698689 6 log.go:172] (0xc002014790) (0xc00084d4a0) Stream removed, broadcasting: 5 Mar 11 11:22:50.698: INFO: Exec stderr: "" Mar 11 11:22:50.698: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x8kfw PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:22:50.698: INFO: >>> kubeConfig: /root/.kube/config I0311 11:22:50.725690 6 log.go:172] (0xc000d9a8f0) (0xc0011a3cc0) Create stream I0311 11:22:50.725713 6 log.go:172] (0xc000d9a8f0) (0xc0011a3cc0) Stream added, broadcasting: 1 I0311 11:22:50.729620 6 log.go:172] (0xc000d9a8f0) Reply frame received for 1 I0311 11:22:50.729662 6 log.go:172] (0xc000d9a8f0) (0xc0011a3d60) Create stream I0311 11:22:50.729679 6 log.go:172] (0xc000d9a8f0) (0xc0011a3d60) Stream added, broadcasting: 3 I0311 11:22:50.732851 6 log.go:172] (0xc000d9a8f0) Reply frame received for 3 I0311 11:22:50.732898 6 log.go:172] (0xc000d9a8f0) (0xc001a00960) Create stream I0311 11:22:50.732915 6 log.go:172] (0xc000d9a8f0) (0xc001a00960) Stream added, broadcasting: 5 I0311 11:22:50.733965 6 log.go:172] (0xc000d9a8f0) Reply frame received for 5 I0311 11:22:50.824320 6 log.go:172] (0xc000d9a8f0) Data frame received for 5 I0311 11:22:50.824353 6 log.go:172] (0xc001a00960) (5) Data frame handling I0311 11:22:50.824375 6 log.go:172] (0xc000d9a8f0) Data frame received for 3 I0311 11:22:50.824384 6 log.go:172] (0xc0011a3d60) (3) Data frame handling I0311 11:22:50.824396 6 log.go:172] (0xc0011a3d60) (3) Data frame sent I0311 11:22:50.824405 6 log.go:172] (0xc000d9a8f0) Data frame received for 3 I0311 11:22:50.824413 6 log.go:172] (0xc0011a3d60) (3) Data frame handling I0311 11:22:50.826019 6 log.go:172] (0xc000d9a8f0) Data frame received for 1 I0311 11:22:50.826048 6 log.go:172] (0xc0011a3cc0) (1) Data frame handling I0311 11:22:50.826065 6 log.go:172] (0xc0011a3cc0) (1) Data frame sent I0311 11:22:50.826074 6 log.go:172] (0xc000d9a8f0) (0xc0011a3cc0) Stream removed, broadcasting: 1 I0311 11:22:50.826088 6 log.go:172] (0xc000d9a8f0) Go away received I0311 
11:22:50.826206 6 log.go:172] (0xc000d9a8f0) (0xc0011a3cc0) Stream removed, broadcasting: 1 I0311 11:22:50.826224 6 log.go:172] (0xc000d9a8f0) (0xc0011a3d60) Stream removed, broadcasting: 3 I0311 11:22:50.826253 6 log.go:172] (0xc000d9a8f0) (0xc001a00960) Stream removed, broadcasting: 5 Mar 11 11:22:50.826: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 11 11:22:50.826: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x8kfw PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:22:50.826: INFO: >>> kubeConfig: /root/.kube/config I0311 11:22:50.847477 6 log.go:172] (0xc00191a2c0) (0xc001a00be0) Create stream I0311 11:22:50.847499 6 log.go:172] (0xc00191a2c0) (0xc001a00be0) Stream added, broadcasting: 1 I0311 11:22:50.849314 6 log.go:172] (0xc00191a2c0) Reply frame received for 1 I0311 11:22:50.849341 6 log.go:172] (0xc00191a2c0) (0xc000646aa0) Create stream I0311 11:22:50.849348 6 log.go:172] (0xc00191a2c0) (0xc000646aa0) Stream added, broadcasting: 3 I0311 11:22:50.849931 6 log.go:172] (0xc00191a2c0) Reply frame received for 3 I0311 11:22:50.849961 6 log.go:172] (0xc00191a2c0) (0xc00084d5e0) Create stream I0311 11:22:50.849972 6 log.go:172] (0xc00191a2c0) (0xc00084d5e0) Stream added, broadcasting: 5 I0311 11:22:50.850616 6 log.go:172] (0xc00191a2c0) Reply frame received for 5 I0311 11:22:50.908456 6 log.go:172] (0xc00191a2c0) Data frame received for 5 I0311 11:22:50.908488 6 log.go:172] (0xc00084d5e0) (5) Data frame handling I0311 11:22:50.908510 6 log.go:172] (0xc00191a2c0) Data frame received for 3 I0311 11:22:50.908519 6 log.go:172] (0xc000646aa0) (3) Data frame handling I0311 11:22:50.908529 6 log.go:172] (0xc000646aa0) (3) Data frame sent I0311 11:22:50.908539 6 log.go:172] (0xc00191a2c0) Data frame received for 3 I0311 11:22:50.908547 6 log.go:172] (0xc000646aa0) (3) Data frame handling I0311 11:22:50.909527 6 log.go:172] (0xc00191a2c0) Data frame received for 1 I0311 11:22:50.909539 6 log.go:172] (0xc001a00be0) (1) Data frame handling I0311 11:22:50.909551 6 log.go:172] (0xc001a00be0) (1) Data frame sent I0311 11:22:50.909562 6 log.go:172] (0xc00191a2c0) (0xc001a00be0) Stream removed, broadcasting: 1 I0311 11:22:50.909582 6 log.go:172] (0xc00191a2c0) Go away received I0311 11:22:50.909628 6 log.go:172] (0xc00191a2c0) (0xc001a00be0) Stream removed, broadcasting: 1 I0311 11:22:50.909639 6 log.go:172] (0xc00191a2c0) (0xc000646aa0) Stream removed, broadcasting: 3 I0311 11:22:50.909649 6 log.go:172] (0xc00191a2c0) (0xc00084d5e0) Stream removed, broadcasting: 5 Mar 11 11:22:50.909: INFO: Exec stderr: "" Mar 11 11:22:50.909: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x8kfw PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:22:50.909: INFO: >>> kubeConfig: /root/.kube/config I0311 11:22:50.935296 6 log.go:172] (0xc0006ffd90) (0xc00119e000) Create stream I0311 11:22:50.935323 6 log.go:172] (0xc0006ffd90) (0xc00119e000) Stream added, broadcasting: 1 I0311 11:22:50.936673 6 log.go:172] (0xc0006ffd90) Reply frame received for 1 I0311 11:22:50.936701 6 log.go:172] (0xc0006ffd90) (0xc0014d4000) Create stream I0311 11:22:50.936712 6 log.go:172] (0xc0006ffd90) (0xc0014d4000) Stream added, broadcasting: 3 I0311 11:22:50.937413 6 log.go:172] (0xc0006ffd90) Reply frame received for 3 
I0311 11:22:50.937467 6 log.go:172] (0xc0006ffd90) (0xc0014d40a0) Create stream I0311 11:22:50.937496 6 log.go:172] (0xc0006ffd90) (0xc0014d40a0) Stream added, broadcasting: 5 I0311 11:22:50.938330 6 log.go:172] (0xc0006ffd90) Reply frame received for 5 I0311 11:22:50.987837 6 log.go:172] (0xc0006ffd90) Data frame received for 5 I0311 11:22:50.987878 6 log.go:172] (0xc0014d40a0) (5) Data frame handling I0311 11:22:50.987906 6 log.go:172] (0xc0006ffd90) Data frame received for 3 I0311 11:22:50.987933 6 log.go:172] (0xc0014d4000) (3) Data frame handling I0311 11:22:50.987945 6 log.go:172] (0xc0014d4000) (3) Data frame sent I0311 11:22:50.987954 6 log.go:172] (0xc0006ffd90) Data frame received for 3 I0311 11:22:50.987958 6 log.go:172] (0xc0014d4000) (3) Data frame handling I0311 11:22:50.989121 6 log.go:172] (0xc0006ffd90) Data frame received for 1 I0311 11:22:50.989139 6 log.go:172] (0xc00119e000) (1) Data frame handling I0311 11:22:50.989154 6 log.go:172] (0xc00119e000) (1) Data frame sent I0311 11:22:50.989184 6 log.go:172] (0xc0006ffd90) (0xc00119e000) Stream removed, broadcasting: 1 I0311 11:22:50.989202 6 log.go:172] (0xc0006ffd90) Go away received I0311 11:22:50.989270 6 log.go:172] (0xc0006ffd90) (0xc00119e000) Stream removed, broadcasting: 1 I0311 11:22:50.989285 6 log.go:172] (0xc0006ffd90) (0xc0014d4000) Stream removed, broadcasting: 3 I0311 11:22:50.989294 6 log.go:172] (0xc0006ffd90) (0xc0014d40a0) Stream removed, broadcasting: 5 Mar 11 11:22:50.989: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 11 11:22:50.989: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x8kfw PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:22:50.989: INFO: >>> kubeConfig: /root/.kube/config I0311 11:22:51.012519 6 log.go:172] (0xc000df7290) (0xc000fae6e0) Create stream I0311 11:22:51.012543 6 log.go:172] (0xc000df7290) (0xc000fae6e0) Stream added, broadcasting: 1 I0311 11:22:51.013959 6 log.go:172] (0xc000df7290) Reply frame received for 1 I0311 11:22:51.013986 6 log.go:172] (0xc000df7290) (0xc00119e140) Create stream I0311 11:22:51.013995 6 log.go:172] (0xc000df7290) (0xc00119e140) Stream added, broadcasting: 3 I0311 11:22:51.014869 6 log.go:172] (0xc000df7290) Reply frame received for 3 I0311 11:22:51.014903 6 log.go:172] (0xc000df7290) (0xc00119e1e0) Create stream I0311 11:22:51.014913 6 log.go:172] (0xc000df7290) (0xc00119e1e0) Stream added, broadcasting: 5 I0311 11:22:51.015739 6 log.go:172] (0xc000df7290) Reply frame received for 5 I0311 11:22:51.067196 6 log.go:172] (0xc000df7290) Data frame received for 5 I0311 11:22:51.067223 6 log.go:172] (0xc00119e1e0) (5) Data frame handling I0311 11:22:51.067239 6 log.go:172] (0xc000df7290) Data frame received for 3 I0311 11:22:51.067245 6 log.go:172] (0xc00119e140) (3) Data frame handling I0311 11:22:51.067252 6 log.go:172] (0xc00119e140) (3) Data frame sent I0311 11:22:51.067258 6 log.go:172] (0xc000df7290) Data frame received for 3 I0311 11:22:51.067263 6 log.go:172] (0xc00119e140) (3) Data frame handling I0311 11:22:51.068315 6 log.go:172] (0xc000df7290) Data frame received for 1 I0311 11:22:51.068327 6 log.go:172] (0xc000fae6e0) (1) Data frame handling I0311 11:22:51.068332 6 log.go:172] (0xc000fae6e0) (1) Data frame sent I0311 11:22:51.068341 6 log.go:172] (0xc000df7290) (0xc000fae6e0) Stream removed, broadcasting: 1 I0311 11:22:51.068417 6 
log.go:172] (0xc000df7290) (0xc000fae6e0) Stream removed, broadcasting: 1 I0311 11:22:51.068443 6 log.go:172] (0xc000df7290) Go away received I0311 11:22:51.068476 6 log.go:172] (0xc000df7290) (0xc00119e140) Stream removed, broadcasting: 3 I0311 11:22:51.068491 6 log.go:172] (0xc000df7290) (0xc00119e1e0) Stream removed, broadcasting: 5 Mar 11 11:22:51.068: INFO: Exec stderr: "" Mar 11 11:22:51.068: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x8kfw PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:22:51.068: INFO: >>> kubeConfig: /root/.kube/config I0311 11:22:51.089275 6 log.go:172] (0xc001f622c0) (0xc002016140) Create stream I0311 11:22:51.089294 6 log.go:172] (0xc001f622c0) (0xc002016140) Stream added, broadcasting: 1 I0311 11:22:51.090697 6 log.go:172] (0xc001f622c0) Reply frame received for 1 I0311 11:22:51.090719 6 log.go:172] (0xc001f622c0) (0xc0014d4140) Create stream I0311 11:22:51.090728 6 log.go:172] (0xc001f622c0) (0xc0014d4140) Stream added, broadcasting: 3 I0311 11:22:51.091250 6 log.go:172] (0xc001f622c0) Reply frame received for 3 I0311 11:22:51.091271 6 log.go:172] (0xc001f622c0) (0xc0020d6140) Create stream I0311 11:22:51.091278 6 log.go:172] (0xc001f622c0) (0xc0020d6140) Stream added, broadcasting: 5 I0311 11:22:51.091871 6 log.go:172] (0xc001f622c0) Reply frame received for 5 I0311 11:22:51.158515 6 log.go:172] (0xc001f622c0) Data frame received for 5 I0311 11:22:51.158535 6 log.go:172] (0xc001f622c0) Data frame received for 3 I0311 11:22:51.158551 6 log.go:172] (0xc0014d4140) (3) Data frame handling I0311 11:22:51.158560 6 log.go:172] (0xc0014d4140) (3) Data frame sent I0311 11:22:51.158566 6 log.go:172] (0xc001f622c0) Data frame received for 3 I0311 11:22:51.158572 6 log.go:172] (0xc0014d4140) (3) Data frame handling I0311 11:22:51.158589 6 log.go:172] (0xc0020d6140) (5) Data frame handling I0311 11:22:51.159108 6 log.go:172] (0xc001f622c0) Data frame received for 1 I0311 11:22:51.159118 6 log.go:172] (0xc002016140) (1) Data frame handling I0311 11:22:51.159126 6 log.go:172] (0xc002016140) (1) Data frame sent I0311 11:22:51.159135 6 log.go:172] (0xc001f622c0) (0xc002016140) Stream removed, broadcasting: 1 I0311 11:22:51.159144 6 log.go:172] (0xc001f622c0) Go away received I0311 11:22:51.159246 6 log.go:172] (0xc001f622c0) (0xc002016140) Stream removed, broadcasting: 1 I0311 11:22:51.159263 6 log.go:172] (0xc001f622c0) (0xc0014d4140) Stream removed, broadcasting: 3 I0311 11:22:51.159270 6 log.go:172] (0xc001f622c0) (0xc0020d6140) Stream removed, broadcasting: 5 Mar 11 11:22:51.159: INFO: Exec stderr: "" Mar 11 11:22:51.159: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x8kfw PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:22:51.159: INFO: >>> kubeConfig: /root/.kube/config I0311 11:22:51.177618 6 log.go:172] (0xc000d9a840) (0xc00119e500) Create stream I0311 11:22:51.177651 6 log.go:172] (0xc000d9a840) (0xc00119e500) Stream added, broadcasting: 1 I0311 11:22:51.181614 6 log.go:172] (0xc000d9a840) Reply frame received for 1 I0311 11:22:51.181646 6 log.go:172] (0xc000d9a840) (0xc0020161e0) Create stream I0311 11:22:51.181655 6 log.go:172] (0xc000d9a840) (0xc0020161e0) Stream added, broadcasting: 3 I0311 11:22:51.183901 6 log.go:172] (0xc000d9a840) Reply frame received for 3 I0311 11:22:51.183939 6 
log.go:172] (0xc000d9a840) (0xc002016280) Create stream I0311 11:22:51.183950 6 log.go:172] (0xc000d9a840) (0xc002016280) Stream added, broadcasting: 5 I0311 11:22:51.184630 6 log.go:172] (0xc000d9a840) Reply frame received for 5 I0311 11:22:51.230655 6 log.go:172] (0xc000d9a840) Data frame received for 3 I0311 11:22:51.230676 6 log.go:172] (0xc0020161e0) (3) Data frame handling I0311 11:22:51.230683 6 log.go:172] (0xc0020161e0) (3) Data frame sent I0311 11:22:51.230690 6 log.go:172] (0xc000d9a840) Data frame received for 3 I0311 11:22:51.230696 6 log.go:172] (0xc0020161e0) (3) Data frame handling I0311 11:22:51.230715 6 log.go:172] (0xc000d9a840) Data frame received for 5 I0311 11:22:51.230725 6 log.go:172] (0xc002016280) (5) Data frame handling I0311 11:22:51.231463 6 log.go:172] (0xc000d9a840) Data frame received for 1 I0311 11:22:51.231480 6 log.go:172] (0xc00119e500) (1) Data frame handling I0311 11:22:51.231493 6 log.go:172] (0xc00119e500) (1) Data frame sent I0311 11:22:51.231505 6 log.go:172] (0xc000d9a840) (0xc00119e500) Stream removed, broadcasting: 1 I0311 11:22:51.231575 6 log.go:172] (0xc000d9a840) (0xc00119e500) Stream removed, broadcasting: 1 I0311 11:22:51.231588 6 log.go:172] (0xc000d9a840) (0xc0020161e0) Stream removed, broadcasting: 3 I0311 11:22:51.231598 6 log.go:172] (0xc000d9a840) (0xc002016280) Stream removed, broadcasting: 5 Mar 11 11:22:51.231: INFO: Exec stderr: "" Mar 11 11:22:51.231: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x8kfw PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:22:51.231: INFO: >>> kubeConfig: /root/.kube/config I0311 11:22:51.233481 6 log.go:172] (0xc000d9a840) Go away received I0311 11:22:51.248564 6 log.go:172] (0xc0011c40b0) (0xc0020d6280) Create stream I0311 11:22:51.248583 6 log.go:172] (0xc0011c40b0) (0xc0020d6280) Stream added, broadcasting: 1 I0311 11:22:51.249437 6 log.go:172] (0xc0011c40b0) Reply frame received for 1 I0311 11:22:51.249460 6 log.go:172] (0xc0011c40b0) (0xc0020d6320) Create stream I0311 11:22:51.249468 6 log.go:172] (0xc0011c40b0) (0xc0020d6320) Stream added, broadcasting: 3 I0311 11:22:51.249998 6 log.go:172] (0xc0011c40b0) Reply frame received for 3 I0311 11:22:51.250016 6 log.go:172] (0xc0011c40b0) (0xc0020d63c0) Create stream I0311 11:22:51.250023 6 log.go:172] (0xc0011c40b0) (0xc0020d63c0) Stream added, broadcasting: 5 I0311 11:22:51.250676 6 log.go:172] (0xc0011c40b0) Reply frame received for 5 I0311 11:22:51.306903 6 log.go:172] (0xc0011c40b0) Data frame received for 5 I0311 11:22:51.306925 6 log.go:172] (0xc0020d63c0) (5) Data frame handling I0311 11:22:51.306940 6 log.go:172] (0xc0011c40b0) Data frame received for 3 I0311 11:22:51.306946 6 log.go:172] (0xc0020d6320) (3) Data frame handling I0311 11:22:51.306954 6 log.go:172] (0xc0020d6320) (3) Data frame sent I0311 11:22:51.306960 6 log.go:172] (0xc0011c40b0) Data frame received for 3 I0311 11:22:51.306965 6 log.go:172] (0xc0020d6320) (3) Data frame handling I0311 11:22:51.307915 6 log.go:172] (0xc0011c40b0) Data frame received for 1 I0311 11:22:51.307932 6 log.go:172] (0xc0020d6280) (1) Data frame handling I0311 11:22:51.307941 6 log.go:172] (0xc0020d6280) (1) Data frame sent I0311 11:22:51.307952 6 log.go:172] (0xc0011c40b0) (0xc0020d6280) Stream removed, broadcasting: 1 I0311 11:22:51.308008 6 log.go:172] (0xc0011c40b0) (0xc0020d6280) Stream removed, broadcasting: 1 I0311 11:22:51.308019 6 log.go:172] (0xc0011c40b0) 
(0xc0020d6320) Stream removed, broadcasting: 3 I0311 11:22:51.308030 6 log.go:172] (0xc0011c40b0) (0xc0020d63c0) Stream removed, broadcasting: 5 Mar 11 11:22:51.308: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:22:51.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0311 11:22:51.308271 6 log.go:172] (0xc0011c40b0) Go away received STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-x8kfw" for this suite. Mar 11 11:23:29.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:23:29.335: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-x8kfw, resource: bindings, ignored listing per whitelist Mar 11 11:23:29.400: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-x8kfw deletion completed in 38.089827674s • [SLOW TEST:47.108 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:23:29.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 11:23:29.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-b7dhm' Mar 11 11:23:29.660: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 11:23:29.660: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Mar 11 11:23:31.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-b7dhm' Mar 11 11:23:31.822: INFO: stderr: "" Mar 11 11:23:31.822: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:23:31.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-b7dhm" for this suite. Mar 11 11:23:53.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:23:53.863: INFO: namespace: e2e-tests-kubectl-b7dhm, resource: bindings, ignored listing per whitelist Mar 11 11:23:53.929: INFO: namespace e2e-tests-kubectl-b7dhm deletion completed in 22.103066776s • [SLOW TEST:24.529 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:23:53.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-x4nkn Mar 11 11:23:56.012: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-x4nkn STEP: checking the pod's current state and verifying that restartCount is present Mar 11 11:23:56.014: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:27:56.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-x4nkn" for this suite. 
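[Editor's note] The "should *not* be restarted" case pairs an exec liveness probe with a file the container keeps in place for its whole life, so the probe never fails and the restart count stays at zero. A minimal sketch; name, image and timings are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo
  spec:
    containers:
    - name: liveness
      image: busybox:1.29
      # /tmp/health exists for the container's entire lifetime, so the probe keeps passing.
      command: ["sh", "-c", "touch /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # RESTARTS should remain 0 for as long as you watch.
  kubectl get pod liveness-exec-demo -w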
Mar 11 11:28:02.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:28:02.819: INFO: namespace: e2e-tests-container-probe-x4nkn, resource: bindings, ignored listing per whitelist Mar 11 11:28:02.819: INFO: namespace e2e-tests-container-probe-x4nkn deletion completed in 6.131707649s • [SLOW TEST:248.890 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:28:02.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-stghz STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 11 11:28:02.931: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 11 11:28:23.016: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.150:8080/dial?request=hostName&protocol=http&host=10.244.1.149&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-stghz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:28:23.016: INFO: >>> kubeConfig: /root/.kube/config I0311 11:28:23.052848 6 log.go:172] (0xc000df71e0) (0xc0011a2000) Create stream I0311 11:28:23.052879 6 log.go:172] (0xc000df71e0) (0xc0011a2000) Stream added, broadcasting: 1 I0311 11:28:23.055088 6 log.go:172] (0xc000df71e0) Reply frame received for 1 I0311 11:28:23.055135 6 log.go:172] (0xc000df71e0) (0xc000fafcc0) Create stream I0311 11:28:23.055153 6 log.go:172] (0xc000df71e0) (0xc000fafcc0) Stream added, broadcasting: 3 I0311 11:28:23.056170 6 log.go:172] (0xc000df71e0) Reply frame received for 3 I0311 11:28:23.056204 6 log.go:172] (0xc000df71e0) (0xc0014d55e0) Create stream I0311 11:28:23.056217 6 log.go:172] (0xc000df71e0) (0xc0014d55e0) Stream added, broadcasting: 5 I0311 11:28:23.057019 6 log.go:172] (0xc000df71e0) Reply frame received for 5 I0311 11:28:23.120108 6 log.go:172] (0xc000df71e0) Data frame received for 3 I0311 11:28:23.120135 6 log.go:172] (0xc000fafcc0) (3) Data frame handling I0311 11:28:23.120150 6 log.go:172] (0xc000fafcc0) (3) Data frame sent I0311 11:28:23.120593 6 log.go:172] (0xc000df71e0) Data frame received for 3 I0311 11:28:23.120615 6 log.go:172] (0xc000fafcc0) (3) Data frame handling I0311 11:28:23.120851 6 log.go:172] (0xc000df71e0) Data frame received for 5 I0311 11:28:23.120872 6 log.go:172] (0xc0014d55e0) (5) Data frame handling I0311 11:28:23.122997 6 log.go:172] 
(0xc000df71e0) Data frame received for 1 I0311 11:28:23.123018 6 log.go:172] (0xc0011a2000) (1) Data frame handling I0311 11:28:23.123028 6 log.go:172] (0xc0011a2000) (1) Data frame sent I0311 11:28:23.123042 6 log.go:172] (0xc000df71e0) (0xc0011a2000) Stream removed, broadcasting: 1 I0311 11:28:23.123065 6 log.go:172] (0xc000df71e0) Go away received I0311 11:28:23.123136 6 log.go:172] (0xc000df71e0) (0xc0011a2000) Stream removed, broadcasting: 1 I0311 11:28:23.123155 6 log.go:172] (0xc000df71e0) (0xc000fafcc0) Stream removed, broadcasting: 3 I0311 11:28:23.123167 6 log.go:172] (0xc000df71e0) (0xc0014d55e0) Stream removed, broadcasting: 5 Mar 11 11:28:23.123: INFO: Waiting for endpoints: map[] Mar 11 11:28:23.126: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.150:8080/dial?request=hostName&protocol=http&host=10.244.2.69&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-stghz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 11:28:23.126: INFO: >>> kubeConfig: /root/.kube/config I0311 11:28:23.154254 6 log.go:172] (0xc000d9a790) (0xc0014d5b80) Create stream I0311 11:28:23.154280 6 log.go:172] (0xc000d9a790) (0xc0014d5b80) Stream added, broadcasting: 1 I0311 11:28:23.158214 6 log.go:172] (0xc000d9a790) Reply frame received for 1 I0311 11:28:23.158263 6 log.go:172] (0xc000d9a790) (0xc000fafd60) Create stream I0311 11:28:23.158283 6 log.go:172] (0xc000d9a790) (0xc000fafd60) Stream added, broadcasting: 3 I0311 11:28:23.159515 6 log.go:172] (0xc000d9a790) Reply frame received for 3 I0311 11:28:23.159539 6 log.go:172] (0xc000d9a790) (0xc0014d5c20) Create stream I0311 11:28:23.159547 6 log.go:172] (0xc000d9a790) (0xc0014d5c20) Stream added, broadcasting: 5 I0311 11:28:23.161161 6 log.go:172] (0xc000d9a790) Reply frame received for 5 I0311 11:28:23.221527 6 log.go:172] (0xc000d9a790) Data frame received for 3 I0311 11:28:23.221550 6 log.go:172] (0xc000fafd60) (3) Data frame handling I0311 11:28:23.221562 6 log.go:172] (0xc000fafd60) (3) Data frame sent I0311 11:28:23.222041 6 log.go:172] (0xc000d9a790) Data frame received for 5 I0311 11:28:23.222066 6 log.go:172] (0xc0014d5c20) (5) Data frame handling I0311 11:28:23.222088 6 log.go:172] (0xc000d9a790) Data frame received for 3 I0311 11:28:23.222096 6 log.go:172] (0xc000fafd60) (3) Data frame handling I0311 11:28:23.223831 6 log.go:172] (0xc000d9a790) Data frame received for 1 I0311 11:28:23.223866 6 log.go:172] (0xc0014d5b80) (1) Data frame handling I0311 11:28:23.223900 6 log.go:172] (0xc0014d5b80) (1) Data frame sent I0311 11:28:23.224136 6 log.go:172] (0xc000d9a790) (0xc0014d5b80) Stream removed, broadcasting: 1 I0311 11:28:23.224172 6 log.go:172] (0xc000d9a790) Go away received I0311 11:28:23.224263 6 log.go:172] (0xc000d9a790) (0xc0014d5b80) Stream removed, broadcasting: 1 I0311 11:28:23.224290 6 log.go:172] (0xc000d9a790) (0xc000fafd60) Stream removed, broadcasting: 3 I0311 11:28:23.224309 6 log.go:172] (0xc000d9a790) (0xc0014d5c20) Stream removed, broadcasting: 5 Mar 11 11:28:23.224: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:28:23.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-stghz" for this suite. 
Mar 11 11:28:45.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:28:45.297: INFO: namespace: e2e-tests-pod-network-test-stghz, resource: bindings, ignored listing per whitelist Mar 11 11:28:45.314: INFO: namespace e2e-tests-pod-network-test-stghz deletion completed in 22.086741562s • [SLOW TEST:42.496 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:28:45.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-780c18c6-638b-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 11:28:45.411: INFO: Waiting up to 5m0s for pod "pod-secrets-780ca4bd-638b-11ea-bacb-0242ac11000a" in namespace "e2e-tests-secrets-qlpbm" to be "success or failure" Mar 11 11:28:45.416: INFO: Pod "pod-secrets-780ca4bd-638b-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.709647ms Mar 11 11:28:47.420: INFO: Pod "pod-secrets-780ca4bd-638b-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008570238s STEP: Saw pod success Mar 11 11:28:47.420: INFO: Pod "pod-secrets-780ca4bd-638b-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:28:47.422: INFO: Trying to get logs from node hunter-worker pod pod-secrets-780ca4bd-638b-11ea-bacb-0242ac11000a container secret-volume-test: STEP: delete the pod Mar 11 11:28:47.465: INFO: Waiting for pod pod-secrets-780ca4bd-638b-11ea-bacb-0242ac11000a to disappear Mar 11 11:28:47.478: INFO: Pod pod-secrets-780ca4bd-638b-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:28:47.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qlpbm" for this suite. 
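The log only shows the pod reaching Succeeded, so as a sketch of the objects behind this "consumable in multiple volumes" case: one Secret mounted read-only at two different paths in the same container. Names, key, and command below are illustrative placeholders for the generated UUID names in the log.

apiVersion: v1
kind: Secret
metadata:
  name: secret-test               # placeholder for the generated secret-test-<uuid> name
stringData:
  data-1: value-1                 # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never            # lets the pod reach Succeeded, matching the "success or failure" wait
  containers:
  - name: secret-volume-test
    image: busybox                # illustrative image
    command: ["/bin/sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test
  - name: secret-volume-2
    secret:
      secretName: secret-test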
Mar 11 11:28:53.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:28:53.552: INFO: namespace: e2e-tests-secrets-qlpbm, resource: bindings, ignored listing per whitelist Mar 11 11:28:53.573: INFO: namespace e2e-tests-secrets-qlpbm deletion completed in 6.092071668s • [SLOW TEST:8.259 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:28:53.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:28:53.673: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cfb9c68-638b-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-kkh84" to be "success or failure" Mar 11 11:28:53.691: INFO: Pod "downwardapi-volume-7cfb9c68-638b-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.394862ms Mar 11 11:28:55.694: INFO: Pod "downwardapi-volume-7cfb9c68-638b-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020568435s STEP: Saw pod success Mar 11 11:28:55.694: INFO: Pod "downwardapi-volume-7cfb9c68-638b-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:28:55.696: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7cfb9c68-638b-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:28:55.711: INFO: Waiting for pod downwardapi-volume-7cfb9c68-638b-11ea-bacb-0242ac11000a to disappear Mar 11 11:28:55.715: INFO: Pod downwardapi-volume-7cfb9c68-638b-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:28:55.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kkh84" for this suite. 
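For the projected downwardAPI DefaultMode case, a hedged sketch of the pod shape involved is below; the 0400 mode and the single metadata.name projection are assumptions, since the log does not show the volume items, only the pod lifecycle.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # illustrative image
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400           # assumed mode; this is what the test verifies on the projected files
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name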
Mar 11 11:29:01.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:29:01.759: INFO: namespace: e2e-tests-projected-kkh84, resource: bindings, ignored listing per whitelist Mar 11 11:29:01.775: INFO: namespace e2e-tests-projected-kkh84 deletion completed in 6.057927743s • [SLOW TEST:8.202 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:29:01.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-81df6f4d-638b-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 11:29:01.896: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-81e1db2f-638b-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-tndhp" to be "success or failure" Mar 11 11:29:01.901: INFO: Pod "pod-projected-secrets-81e1db2f-638b-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.815129ms Mar 11 11:29:03.904: INFO: Pod "pod-projected-secrets-81e1db2f-638b-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008021382s STEP: Saw pod success Mar 11 11:29:03.904: INFO: Pod "pod-projected-secrets-81e1db2f-638b-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:29:03.906: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-81e1db2f-638b-11ea-bacb-0242ac11000a container projected-secret-volume-test: STEP: delete the pod Mar 11 11:29:03.941: INFO: Waiting for pod pod-projected-secrets-81e1db2f-638b-11ea-bacb-0242ac11000a to disappear Mar 11 11:29:03.949: INFO: Pod pod-projected-secrets-81e1db2f-638b-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:29:03.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tndhp" for this suite. 
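The non-root/defaultMode/fsGroup variant differs from the previous case only in the pod-level securityContext, which is again not visible in the log. A minimal sketch, assuming illustrative UID/GID values and a pre-existing Secret:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # illustrative non-root UID
    fsGroup: 2000                 # illustrative group applied to the volume's files
  containers:
  - name: projected-secret-volume-test
    image: busybox                # illustrative image
    command: ["/bin/sh", "-c", "ls -ln /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440           # assumed mode
      sources:
      - secret:
          name: projected-secret-test   # assumes the Secret created in the earlier STEP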
Mar 11 11:29:09.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:29:10.063: INFO: namespace: e2e-tests-projected-tndhp, resource: bindings, ignored listing per whitelist Mar 11 11:29:10.069: INFO: namespace e2e-tests-projected-tndhp deletion completed in 6.116163679s • [SLOW TEST:8.293 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:29:10.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-86cd5161-638b-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 11:29:10.178: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-86ceedf8-638b-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-xpzvh" to be "success or failure" Mar 11 11:29:10.196: INFO: Pod "pod-projected-secrets-86ceedf8-638b-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.740769ms Mar 11 11:29:12.199: INFO: Pod "pod-projected-secrets-86ceedf8-638b-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02120656s STEP: Saw pod success Mar 11 11:29:12.199: INFO: Pod "pod-projected-secrets-86ceedf8-638b-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:29:12.202: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-86ceedf8-638b-11ea-bacb-0242ac11000a container secret-volume-test: STEP: delete the pod Mar 11 11:29:12.220: INFO: Waiting for pod pod-projected-secrets-86ceedf8-638b-11ea-bacb-0242ac11000a to disappear Mar 11 11:29:12.225: INFO: Pod pod-projected-secrets-86ceedf8-638b-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:29:12.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xpzvh" for this suite. 
Mar 11 11:29:18.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:29:18.287: INFO: namespace: e2e-tests-projected-xpzvh, resource: bindings, ignored listing per whitelist Mar 11 11:29:18.327: INFO: namespace e2e-tests-projected-xpzvh deletion completed in 6.099569734s • [SLOW TEST:8.259 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:29:18.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:29:18.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8bbc1a2f-638b-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-n6h8c" to be "success or failure" Mar 11 11:29:18.448: INFO: Pod "downwardapi-volume-8bbc1a2f-638b-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.770839ms Mar 11 11:29:20.451: INFO: Pod "downwardapi-volume-8bbc1a2f-638b-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026548871s STEP: Saw pod success Mar 11 11:29:20.451: INFO: Pod "downwardapi-volume-8bbc1a2f-638b-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:29:20.454: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8bbc1a2f-638b-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:29:20.489: INFO: Waiting for pod downwardapi-volume-8bbc1a2f-638b-11ea-bacb-0242ac11000a to disappear Mar 11 11:29:20.494: INFO: Pod downwardapi-volume-8bbc1a2f-638b-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:29:20.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-n6h8c" for this suite. 
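The "provide container's memory limit" case exposes limits.memory through a downwardAPI projection rather than a fieldRef. A sketch under illustrative values (the 64Mi limit and file path are assumptions; the test reads the limit back in bytes from the projected file):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-memlimit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # illustrative image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi              # illustrative limit; its byte value appears in the projected file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory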
Mar 11 11:29:26.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:29:26.560: INFO: namespace: e2e-tests-projected-n6h8c, resource: bindings, ignored listing per whitelist Mar 11 11:29:26.567: INFO: namespace e2e-tests-projected-n6h8c deletion completed in 6.070608102s • [SLOW TEST:8.239 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:29:26.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 11 11:29:26.629: INFO: Waiting up to 5m0s for pod "pod-90a05328-638b-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-x64h5" to be "success or failure" Mar 11 11:29:26.632: INFO: Pod "pod-90a05328-638b-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637129ms Mar 11 11:29:28.636: INFO: Pod "pod-90a05328-638b-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00704337s Mar 11 11:29:30.640: INFO: Pod "pod-90a05328-638b-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01110648s STEP: Saw pod success Mar 11 11:29:30.640: INFO: Pod "pod-90a05328-638b-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:29:30.643: INFO: Trying to get logs from node hunter-worker pod pod-90a05328-638b-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 11:29:30.671: INFO: Waiting for pod pod-90a05328-638b-11ea-bacb-0242ac11000a to disappear Mar 11 11:29:30.692: INFO: Pod pod-90a05328-638b-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:29:30.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-x64h5" for this suite. 
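The emptyDir permission matrix ((non-root,0666,tmpfs) here, with the default-medium variants later in this run) boils down to a tmpfs-backed emptyDir plus a non-root securityContext. A minimal sketch with an illustrative UID, file name, and check command:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-test
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # illustrative non-root UID
  containers:
  - name: test-container
    image: busybox                # illustrative image
    # write a file, force 0666, and print the mode back for verification
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # tmpfs-backed emptyDir; omit medium for the "default" variants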
Mar 11 11:29:36.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:29:36.718: INFO: namespace: e2e-tests-emptydir-x64h5, resource: bindings, ignored listing per whitelist Mar 11 11:29:36.793: INFO: namespace e2e-tests-emptydir-x64h5 deletion completed in 6.097990974s • [SLOW TEST:10.226 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:29:36.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:29:36.882: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 11 11:29:36.901: INFO: Number of nodes with available pods: 0 Mar 11 11:29:36.901: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
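The "run and stop complex daemon" steps that follow drive scheduling entirely through node labels and the DaemonSet's nodeSelector, then flip the update strategy. A hedged sketch of the object involved (the label key and image are illustrative; the log only names the colors blue and green in the STEP descriptions):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate           # the test switches the strategy to RollingUpdate mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue               # illustrative key; relabeling the node to green unschedules the pod
      containers:
      - name: app
        image: nginx:1.14-alpine  # illustrative image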
Mar 11 11:29:36.943: INFO: Number of nodes with available pods: 0 Mar 11 11:29:36.943: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:37.947: INFO: Number of nodes with available pods: 0 Mar 11 11:29:37.947: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:38.947: INFO: Number of nodes with available pods: 1 Mar 11 11:29:38.947: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 11 11:29:38.985: INFO: Number of nodes with available pods: 1 Mar 11 11:29:38.985: INFO: Number of running nodes: 0, number of available pods: 1 Mar 11 11:29:39.990: INFO: Number of nodes with available pods: 0 Mar 11 11:29:39.990: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 11 11:29:39.999: INFO: Number of nodes with available pods: 0 Mar 11 11:29:39.999: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:41.003: INFO: Number of nodes with available pods: 0 Mar 11 11:29:41.003: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:42.003: INFO: Number of nodes with available pods: 0 Mar 11 11:29:42.003: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:43.003: INFO: Number of nodes with available pods: 0 Mar 11 11:29:43.003: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:44.003: INFO: Number of nodes with available pods: 0 Mar 11 11:29:44.004: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:45.003: INFO: Number of nodes with available pods: 0 Mar 11 11:29:45.003: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:46.003: INFO: Number of nodes with available pods: 0 Mar 11 11:29:46.003: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:47.003: INFO: Number of nodes with available pods: 0 Mar 11 11:29:47.003: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:48.003: INFO: Number of nodes with available pods: 0 Mar 11 11:29:48.003: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:49.002: INFO: Number of nodes with available pods: 0 Mar 11 11:29:49.002: INFO: Node hunter-worker is running more than one daemon pod Mar 11 11:29:50.003: INFO: Number of nodes with available pods: 1 Mar 11 11:29:50.003: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-z6swb, will wait for the garbage collector to delete the pods Mar 11 11:29:50.066: INFO: Deleting DaemonSet.extensions daemon-set took: 5.563239ms Mar 11 11:29:50.166: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.211861ms Mar 11 11:29:54.081: INFO: Number of nodes with available pods: 0 Mar 11 11:29:54.081: INFO: Number of running nodes: 0, number of available pods: 0 Mar 11 11:29:54.083: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-z6swb/daemonsets","resourceVersion":"507120"},"items":null} Mar 11 11:29:54.085: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-z6swb/pods","resourceVersion":"507120"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:29:54.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-z6swb" for this suite. Mar 11 11:30:00.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:30:00.239: INFO: namespace: e2e-tests-daemonsets-z6swb, resource: bindings, ignored listing per whitelist Mar 11 11:30:00.242: INFO: namespace e2e-tests-daemonsets-z6swb deletion completed in 6.128247951s • [SLOW TEST:23.448 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:30:00.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Mar 11 11:30:00.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qjjtp' Mar 11 11:30:02.076: INFO: stderr: "" Mar 11 11:30:02.076: INFO: stdout: "pod/pause created\n" Mar 11 11:30:02.076: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 11 11:30:02.076: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-qjjtp" to be "running and ready" Mar 11 11:30:02.112: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 35.717924ms Mar 11 11:30:04.115: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.038718931s Mar 11 11:30:04.115: INFO: Pod "pause" satisfied condition "running and ready" Mar 11 11:30:04.115: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Mar 11 11:30:04.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-qjjtp' Mar 11 11:30:04.233: INFO: stderr: "" Mar 11 11:30:04.233: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 11 11:30:04.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-qjjtp' Mar 11 11:30:04.332: INFO: stderr: "" Mar 11 11:30:04.332: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 11 11:30:04.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-qjjtp' Mar 11 11:30:04.407: INFO: stderr: "" Mar 11 11:30:04.407: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 11 11:30:04.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-qjjtp' Mar 11 11:30:04.482: INFO: stderr: "" Mar 11 11:30:04.483: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Mar 11 11:30:04.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qjjtp' Mar 11 11:30:04.594: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 11:30:04.594: INFO: stdout: "pod \"pause\" force deleted\n" Mar 11 11:30:04.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-qjjtp' Mar 11 11:30:04.676: INFO: stderr: "No resources found.\n" Mar 11 11:30:04.676: INFO: stdout: "" Mar 11 11:30:04.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-qjjtp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 11 11:30:04.743: INFO: stderr: "" Mar 11 11:30:04.743: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:30:04.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qjjtp" for this suite. 
Mar 11 11:30:10.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:30:10.787: INFO: namespace: e2e-tests-kubectl-qjjtp, resource: bindings, ignored listing per whitelist Mar 11 11:30:10.846: INFO: namespace e2e-tests-kubectl-qjjtp deletion completed in 6.100827225s • [SLOW TEST:10.604 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:30:10.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 11 11:30:11.527: INFO: Pod name wrapped-volume-race-ab5fe2b9-638b-11ea-bacb-0242ac11000a: Found 0 pods out of 5 Mar 11 11:30:16.531: INFO: Pod name wrapped-volume-race-ab5fe2b9-638b-11ea-bacb-0242ac11000a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ab5fe2b9-638b-11ea-bacb-0242ac11000a in namespace e2e-tests-emptydir-wrapper-5fzxx, will wait for the garbage collector to delete the pods Mar 11 11:32:08.614: INFO: Deleting ReplicationController wrapped-volume-race-ab5fe2b9-638b-11ea-bacb-0242ac11000a took: 7.650591ms Mar 11 11:32:08.815: INFO: Terminating ReplicationController wrapped-volume-race-ab5fe2b9-638b-11ea-bacb-0242ac11000a pods took: 200.354807ms STEP: Creating RC which spawns configmap-volume pods Mar 11 11:32:48.340: INFO: Pod name wrapped-volume-race-08d7cec3-638c-11ea-bacb-0242ac11000a: Found 0 pods out of 5 Mar 11 11:32:53.346: INFO: Pod name wrapped-volume-race-08d7cec3-638c-11ea-bacb-0242ac11000a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-08d7cec3-638c-11ea-bacb-0242ac11000a in namespace e2e-tests-emptydir-wrapper-5fzxx, will wait for the garbage collector to delete the pods Mar 11 11:35:27.425: INFO: Deleting ReplicationController wrapped-volume-race-08d7cec3-638c-11ea-bacb-0242ac11000a took: 4.939086ms Mar 11 11:35:27.625: INFO: Terminating ReplicationController wrapped-volume-race-08d7cec3-638c-11ea-bacb-0242ac11000a pods took: 200.207674ms STEP: Creating RC which spawns configmap-volume pods Mar 11 11:36:07.950: INFO: Pod name wrapped-volume-race-7fd1f842-638c-11ea-bacb-0242ac11000a: Found 0 pods out of 5 Mar 11 11:36:12.957: INFO: Pod name wrapped-volume-race-7fd1f842-638c-11ea-bacb-0242ac11000a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: 
deleting ReplicationController wrapped-volume-race-7fd1f842-638c-11ea-bacb-0242ac11000a in namespace e2e-tests-emptydir-wrapper-5fzxx, will wait for the garbage collector to delete the pods Mar 11 11:38:47.045: INFO: Deleting ReplicationController wrapped-volume-race-7fd1f842-638c-11ea-bacb-0242ac11000a took: 7.311331ms Mar 11 11:38:47.145: INFO: Terminating ReplicationController wrapped-volume-race-7fd1f842-638c-11ea-bacb-0242ac11000a pods took: 100.227402ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:39:29.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-5fzxx" for this suite. Mar 11 11:39:37.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:39:37.531: INFO: namespace: e2e-tests-emptydir-wrapper-5fzxx, resource: bindings, ignored listing per whitelist Mar 11 11:39:37.546: INFO: namespace e2e-tests-emptydir-wrapper-5fzxx deletion completed in 8.105128926s • [SLOW TEST:566.699 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:39:37.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 11 11:39:37.626: INFO: namespace e2e-tests-kubectl-swwwj Mar 11 11:39:37.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-swwwj' Mar 11 11:39:37.904: INFO: stderr: "" Mar 11 11:39:37.904: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 11 11:39:38.908: INFO: Selector matched 1 pods for map[app:redis] Mar 11 11:39:38.908: INFO: Found 0 / 1 Mar 11 11:39:39.914: INFO: Selector matched 1 pods for map[app:redis] Mar 11 11:39:39.914: INFO: Found 1 / 1 Mar 11 11:39:39.914: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 11 11:39:39.917: INFO: Selector matched 1 pods for map[app:redis] Mar 11 11:39:39.917: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 11 11:39:39.917: INFO: wait on redis-master startup in e2e-tests-kubectl-swwwj Mar 11 11:39:39.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-h9nqm redis-master --namespace=e2e-tests-kubectl-swwwj' Mar 11 11:39:40.068: INFO: stderr: "" Mar 11 11:39:40.068: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 Mar 11:39:39.111 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Mar 11:39:39.111 # Server started, Redis version 3.2.12\n1:M 11 Mar 11:39:39.112 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Mar 11:39:39.112 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 11 11:39:40.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-swwwj' Mar 11 11:39:40.195: INFO: stderr: "" Mar 11 11:39:40.195: INFO: stdout: "service/rm2 exposed\n" Mar 11 11:39:40.201: INFO: Service rm2 in namespace e2e-tests-kubectl-swwwj found. STEP: exposing service Mar 11 11:39:42.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-swwwj' Mar 11 11:39:42.341: INFO: stderr: "" Mar 11 11:39:42.341: INFO: stdout: "service/rm3 exposed\n" Mar 11 11:39:42.345: INFO: Service rm3 in namespace e2e-tests-kubectl-swwwj found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:39:44.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-swwwj" for this suite. 
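For readers who prefer manifests to the imperative command, the object generated by "kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379" above is roughly equivalent to the Service below; the selector is copied from the replication controller, assumed here to be the app=redis label that the test's pod query matched.

apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis                    # assumed to match the redis-master pod labels from the RC template
  ports:
  - port: 1234
    targetPort: 6379

The second expose step then fronts rm2 itself with another Service (rm3 on port 2345) in the same way, pointing at the same backing pods.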
Mar 11 11:40:06.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:40:06.383: INFO: namespace: e2e-tests-kubectl-swwwj, resource: bindings, ignored listing per whitelist Mar 11 11:40:06.439: INFO: namespace e2e-tests-kubectl-swwwj deletion completed in 22.083411149s • [SLOW TEST:28.893 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:40:06.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 11 11:40:06.514: INFO: Waiting up to 5m0s for pod "pod-0e070cef-638d-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-xlmft" to be "success or failure" Mar 11 11:40:06.519: INFO: Pod "pod-0e070cef-638d-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.560502ms Mar 11 11:40:08.523: INFO: Pod "pod-0e070cef-638d-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008274741s STEP: Saw pod success Mar 11 11:40:08.523: INFO: Pod "pod-0e070cef-638d-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:40:08.525: INFO: Trying to get logs from node hunter-worker pod pod-0e070cef-638d-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 11:40:08.561: INFO: Waiting for pod pod-0e070cef-638d-11ea-bacb-0242ac11000a to disappear Mar 11 11:40:08.567: INFO: Pod pod-0e070cef-638d-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:40:08.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xlmft" for this suite. 
Mar 11 11:40:14.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:40:14.630: INFO: namespace: e2e-tests-emptydir-xlmft, resource: bindings, ignored listing per whitelist Mar 11 11:40:14.668: INFO: namespace e2e-tests-emptydir-xlmft deletion completed in 6.097504399s • [SLOW TEST:8.228 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:40:14.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Mar 11 11:40:14.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 11 11:40:14.920: INFO: stderr: "" Mar 11 11:40:14.920: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:40:14.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d766l" for this suite. 
Mar 11 11:40:20.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:40:21.030: INFO: namespace: e2e-tests-kubectl-d766l, resource: bindings, ignored listing per whitelist Mar 11 11:40:21.039: INFO: namespace e2e-tests-kubectl-d766l deletion completed in 6.115675734s • [SLOW TEST:6.371 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:40:21.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-16c2f434-638d-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 11:40:21.172: INFO: Waiting up to 5m0s for pod "pod-secrets-16c3479b-638d-11ea-bacb-0242ac11000a" in namespace "e2e-tests-secrets-rqqjt" to be "success or failure" Mar 11 11:40:21.176: INFO: Pod "pod-secrets-16c3479b-638d-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.667991ms Mar 11 11:40:23.180: INFO: Pod "pod-secrets-16c3479b-638d-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008160845s STEP: Saw pod success Mar 11 11:40:23.180: INFO: Pod "pod-secrets-16c3479b-638d-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:40:23.183: INFO: Trying to get logs from node hunter-worker pod pod-secrets-16c3479b-638d-11ea-bacb-0242ac11000a container secret-env-test: STEP: delete the pod Mar 11 11:40:23.202: INFO: Waiting for pod pod-secrets-16c3479b-638d-11ea-bacb-0242ac11000a to disappear Mar 11 11:40:23.255: INFO: Pod pod-secrets-16c3479b-638d-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:40:23.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-rqqjt" for this suite. 
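Unlike the earlier volume-based secret cases, this one injects the secret through an environment variable. A minimal sketch with illustrative names (the secret and key names below are placeholders for the generated ones in the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox                # illustrative image
    command: ["/bin/sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test       # assumes a Secret of this name exists
          key: data-1             # illustrative key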
Mar 11 11:40:29.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:40:29.348: INFO: namespace: e2e-tests-secrets-rqqjt, resource: bindings, ignored listing per whitelist Mar 11 11:40:29.371: INFO: namespace e2e-tests-secrets-rqqjt deletion completed in 6.112478141s • [SLOW TEST:8.332 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:40:29.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Mar 11 11:40:29.444: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:40:29.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zfsp5" for this suite. 
Mar 11 11:40:35.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:40:35.575: INFO: namespace: e2e-tests-kubectl-zfsp5, resource: bindings, ignored listing per whitelist Mar 11 11:40:35.598: INFO: namespace e2e-tests-kubectl-zfsp5 deletion completed in 6.086694059s • [SLOW TEST:6.226 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:40:35.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Mar 11 11:40:36.226: INFO: created pod pod-service-account-defaultsa Mar 11 11:40:36.226: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 11 11:40:36.232: INFO: created pod pod-service-account-mountsa Mar 11 11:40:36.232: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 11 11:40:36.260: INFO: created pod pod-service-account-nomountsa Mar 11 11:40:36.260: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 11 11:40:36.282: INFO: created pod pod-service-account-defaultsa-mountspec Mar 11 11:40:36.282: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 11 11:40:36.356: INFO: created pod pod-service-account-mountsa-mountspec Mar 11 11:40:36.356: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 11 11:40:36.366: INFO: created pod pod-service-account-nomountsa-mountspec Mar 11 11:40:36.366: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 11 11:40:36.390: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 11 11:40:36.390: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 11 11:40:36.420: INFO: created pod pod-service-account-mountsa-nomountspec Mar 11 11:40:36.420: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 11 11:40:36.425: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 11 11:40:36.425: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:40:36.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-zx5nm" for this suite. 
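The automount opt-out matrix above combines the service-account-level and pod-level settings; the "false" results correspond to either level disabling the token mount, with the pod-level field winning when both are set. A sketch of the two knobs, with illustrative names:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                # illustrative name
automountServiceAccountToken: false   # opt out at the service-account level
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomount
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false # pod-level field; overrides the service account when set
  containers:
  - name: app
    image: nginx:1.14-alpine      # illustrative image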
Mar 11 11:40:42.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:40:42.628: INFO: namespace: e2e-tests-svcaccounts-zx5nm, resource: bindings, ignored listing per whitelist Mar 11 11:40:42.638: INFO: namespace e2e-tests-svcaccounts-zx5nm deletion completed in 6.1576332s • [SLOW TEST:7.041 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:40:42.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 11 11:40:42.718: INFO: Waiting up to 5m0s for pod "pod-239ad2e2-638d-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-rsft7" to be "success or failure" Mar 11 11:40:42.722: INFO: Pod "pod-239ad2e2-638d-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.687544ms Mar 11 11:40:44.727: INFO: Pod "pod-239ad2e2-638d-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009732527s STEP: Saw pod success Mar 11 11:40:44.727: INFO: Pod "pod-239ad2e2-638d-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:40:44.730: INFO: Trying to get logs from node hunter-worker pod pod-239ad2e2-638d-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 11:40:44.762: INFO: Waiting for pod pod-239ad2e2-638d-11ea-bacb-0242ac11000a to disappear Mar 11 11:40:44.766: INFO: Pod pod-239ad2e2-638d-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:40:44.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rsft7" for this suite. 
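The emptydir spec above writes a file with mode 0666 as a non-root user onto the default (node-disk) medium and verifies the resulting permissions. A rough equivalent using stock images rather than the e2e mount-test image (the UID, names and image are illustrative assumptions):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  securityContext:
    runAsUser: 1000                  # non-root UID (illustrative)
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "touch /data/f && chmod 0666 /data/f && stat -c '%a %u' /data/f"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}                     # default medium, i.e. backed by node disk rather than tmpfs
EOF
kubectl logs emptydir-mode-check     # expect output like: 666 1000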
Mar 11 11:40:50.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:40:50.855: INFO: namespace: e2e-tests-emptydir-rsft7, resource: bindings, ignored listing per whitelist Mar 11 11:40:50.889: INFO: namespace e2e-tests-emptydir-rsft7 deletion completed in 6.11934353s • [SLOW TEST:8.250 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:40:50.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:40:50.992: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Mar 11 11:40:50.996: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tzkhs/daemonsets","resourceVersion":"509087"},"items":null} Mar 11 11:40:50.998: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tzkhs/pods","resourceVersion":"509087"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:40:51.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-tzkhs" for this suite. 
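The daemon-set rollback spec is skipped with "Requires at least 2 nodes (not -1)". Reading the -1 as the framework's placeholder for an unconfigured expected node count is an assumption based on the skip message, not something the log states; either way the spec needs at least two schedulable workers. A quick way to see how many a cluster actually has:

# Count schedulable nodes and check for cordoned or NotReady workers (generic commands,
# not part of the e2e suite).
kubectl get nodes --no-headers | wc -l
kubectl get nodes -o wide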
Mar 11 11:40:57.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:40:57.241: INFO: namespace: e2e-tests-daemonsets-tzkhs, resource: bindings, ignored listing per whitelist Mar 11 11:40:57.254: INFO: namespace e2e-tests-daemonsets-tzkhs deletion completed in 6.247960347s S [SKIPPING] [6.365 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:40:50.992: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:40:57.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-2c4eb7d3-638d-11ea-bacb-0242ac11000a STEP: Creating the pod STEP: Updating configmap configmap-test-upd-2c4eb7d3-638d-11ea-bacb-0242ac11000a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:41:01.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6h95z" for this suite. 
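The configmap spec above mounts a ConfigMap as a volume, updates the ConfigMap, and waits to observe the new value in the projected file. A hand-rolled version of the same check (names and image are illustrative; the --dry-run form matches the v1.13-era kubectl used in this log):

kubectl create configmap demo-cm --from-literal=key=value-1
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/key; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF
# Update the ConfigMap; the mounted file is refreshed after the kubelet's sync period.
kubectl create configmap demo-cm --from-literal=key=value-2 --dry-run -o yaml | kubectl apply -f -
kubectl logs -f cm-watch             # output flips from value-1 to value-2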
Mar 11 11:41:23.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:41:23.469: INFO: namespace: e2e-tests-configmap-6h95z, resource: bindings, ignored listing per whitelist Mar 11 11:41:23.502: INFO: namespace e2e-tests-configmap-6h95z deletion completed in 22.101333837s • [SLOW TEST:26.248 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:41:23.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-3bfa0dfc-638d-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:41:23.638: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3bfabd90-638d-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-96lbt" to be "success or failure" Mar 11 11:41:23.640: INFO: Pod "pod-projected-configmaps-3bfabd90-638d-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323862ms Mar 11 11:41:25.648: INFO: Pod "pod-projected-configmaps-3bfabd90-638d-11ea-bacb-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 2.010241133s Mar 11 11:41:27.660: INFO: Pod "pod-projected-configmaps-3bfabd90-638d-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022429467s STEP: Saw pod success Mar 11 11:41:27.660: INFO: Pod "pod-projected-configmaps-3bfabd90-638d-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:41:27.663: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-3bfabd90-638d-11ea-bacb-0242ac11000a container projected-configmap-volume-test: STEP: delete the pod Mar 11 11:41:27.678: INFO: Waiting for pod pod-projected-configmaps-3bfabd90-638d-11ea-bacb-0242ac11000a to disappear Mar 11 11:41:27.683: INFO: Pod pod-projected-configmaps-3bfabd90-638d-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:41:27.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-96lbt" for this suite. 
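The projected-configmap spec above consumes a ConfigMap through a projected volume while the pod runs as a non-root user. A sketch of the same shape (UID, names and image are illustrative assumptions):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot
spec:
  securityContext:
    runAsUser: 1000                  # non-root, matching the intent of the test name
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/key"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: demo-cm              # reuses the ConfigMap from the previous sketch
EOF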
Mar 11 11:41:33.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:41:33.758: INFO: namespace: e2e-tests-projected-96lbt, resource: bindings, ignored listing per whitelist Mar 11 11:41:33.768: INFO: namespace e2e-tests-projected-96lbt deletion completed in 6.081951859s • [SLOW TEST:10.266 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:41:33.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-9r9q2 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-9r9q2;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-9r9q2 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-9r9q2.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-9r9q2.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-9r9q2.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-9r9q2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-9r9q2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-9r9q2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-9r9q2.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-9r9q2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 208.246.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.246.208_udp@PTR;check="$$(dig +tcp +noall +answer +search 208.246.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.246.208_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-9r9q2 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-9r9q2;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-9r9q2 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-9r9q2.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-9r9q2.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-9r9q2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-9r9q2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-9r9q2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-9r9q2.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-9r9q2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 208.246.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.246.208_udp@PTR;check="$$(dig +tcp +noall +answer +search 208.246.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.246.208_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 11 11:41:37.944: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:37.947: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:37.958: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:37.964: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:37.983: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:37.985: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:37.988: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:37.990: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:37.993: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:37.996: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:37.998: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:38.001: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:38.015: INFO: Lookups using 
e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc] Mar 11 11:41:43.019: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.023: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.034: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.038: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.058: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.060: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.062: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.064: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.067: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.069: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.071: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods 
dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.073: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:43.088: INFO: Lookups using e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc] Mar 11 11:41:48.019: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.022: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.033: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.038: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.055: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.058: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.060: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.062: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.065: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.067: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod 
e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.069: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.072: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:48.086: INFO: Lookups using e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc] Mar 11 11:41:53.020: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:53.023: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:53.035: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:53.041: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:53.061: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:53.063: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:53.065: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:53.068: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 
11:41:53.070: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:53.072: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:53.075: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:53.077: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:53.097: INFO: Lookups using e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc] Mar 11 11:41:58.019: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.021: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.034: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.041: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.062: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.064: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.067: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not 
find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.070: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.072: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.074: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.077: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.079: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:41:58.095: INFO: Lookups using e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc] Mar 11 11:42:03.019: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.021: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.030: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.034: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.050: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.051: INFO: Unable to read jessie_tcp@dns-test-service from pod 
e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.053: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.055: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.057: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.059: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.061: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.063: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc from pod e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a: the server could not find the requested resource (get pods dns-test-421e35c1-638d-11ea-bacb-0242ac11000a) Mar 11 11:42:03.078: INFO: Lookups using e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-9r9q2 jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2 jessie_udp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@dns-test-service.e2e-tests-dns-9r9q2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc] Mar 11 11:42:08.088: INFO: DNS probes using e2e-tests-dns-9r9q2/dns-test-421e35c1-638d-11ea-bacb-0242ac11000a succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:42:08.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-9r9q2" for this suite. 
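The repeated "the server could not find the requested resource (get pods ...)" lines above are, as far as the log shows, the prober polling the probe pod for its result files before the dig loops have produced them; it retries until 11:42:08, when every lookup succeeds. To spot-check the same record shapes by hand one can run a throwaway DNS pod (the image choice is illustrative, and the test namespace no longer exists, so the names would have to be adapted):

kubectl run -it --rm dns-check --restart=Never --image=tutum/dnsutils -- \
  dig +short dns-test-service.e2e-tests-dns-9r9q2.svc.cluster.local A
kubectl run -it --rm dns-check-srv --restart=Never --image=tutum/dnsutils -- \
  dig +short _http._tcp.dns-test-service.e2e-tests-dns-9r9q2.svc.cluster.local SRV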
Mar 11 11:42:14.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:42:14.291: INFO: namespace: e2e-tests-dns-9r9q2, resource: bindings, ignored listing per whitelist Mar 11 11:42:14.322: INFO: namespace e2e-tests-dns-9r9q2 deletion completed in 6.116683598s • [SLOW TEST:40.554 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:42:14.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-5a40f346-638d-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 11:42:14.483: INFO: Waiting up to 5m0s for pod "pod-secrets-5a4bc4b6-638d-11ea-bacb-0242ac11000a" in namespace "e2e-tests-secrets-plfkt" to be "success or failure" Mar 11 11:42:14.485: INFO: Pod "pod-secrets-5a4bc4b6-638d-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.648848ms Mar 11 11:42:16.487: INFO: Pod "pod-secrets-5a4bc4b6-638d-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.003736912s STEP: Saw pod success Mar 11 11:42:16.487: INFO: Pod "pod-secrets-5a4bc4b6-638d-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:42:16.488: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-5a4bc4b6-638d-11ea-bacb-0242ac11000a container secret-volume-test: STEP: delete the pod Mar 11 11:42:16.498: INFO: Waiting for pod pod-secrets-5a4bc4b6-638d-11ea-bacb-0242ac11000a to disappear Mar 11 11:42:16.528: INFO: Pod pod-secrets-5a4bc4b6-638d-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:42:16.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-plfkt" for this suite. Mar 11 11:42:22.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:42:22.574: INFO: namespace: e2e-tests-secrets-plfkt, resource: bindings, ignored listing per whitelist Mar 11 11:42:22.616: INFO: namespace e2e-tests-secrets-plfkt deletion completed in 6.08582096s STEP: Destroying namespace "e2e-tests-secret-namespace-dbz96" for this suite. 
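The secrets spec above creates a same-named secret in a second namespace (hence the extra e2e-tests-secret-namespace-dbz96 teardown) and verifies the pod still mounts the copy from its own namespace. The same behaviour can be demonstrated directly (namespaces, names and image are illustrative):

kubectl create namespace ns-a
kubectl create namespace ns-b
kubectl -n ns-a create secret generic shared-name --from-literal=data=from-ns-a
kubectl -n ns-b create secret generic shared-name --from-literal=data=from-ns-b
cat <<EOF | kubectl -n ns-a apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-reader
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/secret/data; echo"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret
  volumes:
  - name: sec
    secret:
      secretName: shared-name        # resolves to the secret in ns-a, never the one in ns-b
EOF
kubectl -n ns-a logs secret-reader   # prints: from-ns-a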
Mar 11 11:42:28.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:42:28.639: INFO: namespace: e2e-tests-secret-namespace-dbz96, resource: bindings, ignored listing per whitelist Mar 11 11:42:28.710: INFO: namespace e2e-tests-secret-namespace-dbz96 deletion completed in 6.093955786s • [SLOW TEST:14.388 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:42:28.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Mar 11 11:42:28.819: INFO: Waiting up to 5m0s for pod "client-containers-62d8ace8-638d-11ea-bacb-0242ac11000a" in namespace "e2e-tests-containers-5wnvv" to be "success or failure" Mar 11 11:42:28.822: INFO: Pod "client-containers-62d8ace8-638d-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.41835ms Mar 11 11:42:30.827: INFO: Pod "client-containers-62d8ace8-638d-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007865054s Mar 11 11:42:32.831: INFO: Pod "client-containers-62d8ace8-638d-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011855349s STEP: Saw pod success Mar 11 11:42:32.831: INFO: Pod "client-containers-62d8ace8-638d-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:42:32.834: INFO: Trying to get logs from node hunter-worker pod client-containers-62d8ace8-638d-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 11:42:32.853: INFO: Waiting for pod client-containers-62d8ace8-638d-11ea-bacb-0242ac11000a to disappear Mar 11 11:42:32.858: INFO: Pod client-containers-62d8ace8-638d-11ea-bacb-0242ac11000a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:42:32.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-5wnvv" for this suite. 
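The containers spec above overrides the image's default command: in a pod spec, command replaces the image ENTRYPOINT and args replaces CMD. A minimal illustration (pod name and image are not from the test):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["echo"]                        # replaces the image ENTRYPOINT
    args: ["entrypoint overridden"]          # replaces the image CMD
EOF
kubectl logs entrypoint-override             # prints: entrypoint overridden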
Mar 11 11:42:38.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:42:38.914: INFO: namespace: e2e-tests-containers-5wnvv, resource: bindings, ignored listing per whitelist Mar 11 11:42:38.962: INFO: namespace e2e-tests-containers-5wnvv deletion completed in 6.099134356s • [SLOW TEST:10.251 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:42:38.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0311 11:42:45.082800 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 11 11:42:45.082: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:42:45.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-7w7fx" for this suite. 
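The garbage-collector spec above deletes a replication controller with deleteOptions requesting foreground propagation, so the RC must survive until all of its pods are gone. One way to exercise the same API behaviour by hand is a raw DELETE through a local proxy (resource name and namespace are illustrative; the v1.13-era kubectl in this log has no --cascade=foreground flag, hence the curl):

kubectl proxy --port=8001 &
curl -X DELETE \
  "http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/demo-rc" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
# Until its pods finish terminating, the RC still exists and carries the foregroundDeletion finalizer:
kubectl get rc demo-rc -o jsonpath='{.metadata.finalizers}'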
Mar 11 11:42:51.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:42:51.169: INFO: namespace: e2e-tests-gc-7w7fx, resource: bindings, ignored listing per whitelist Mar 11 11:42:51.175: INFO: namespace e2e-tests-gc-7w7fx deletion completed in 6.089483039s • [SLOW TEST:12.213 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:42:51.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:42:51.312: INFO: Creating deployment "test-recreate-deployment" Mar 11 11:42:51.319: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 11 11:42:51.333: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Mar 11 11:42:53.339: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 11 11:42:53.341: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 11 11:42:53.345: INFO: Updating deployment test-recreate-deployment Mar 11 11:42:53.345: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 11 11:42:53.692: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-jcppw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jcppw/deployments/test-recreate-deployment,UID:7042065e-638d-11ea-9978-0242ac11000d,ResourceVersion:509753,Generation:2,CreationTimestamp:2020-03-11 11:42:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-11 11:42:53 +0000 UTC 2020-03-11 11:42:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-11 11:42:53 +0000 UTC 2020-03-11 11:42:51 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 11 11:42:53.695: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-jcppw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jcppw/replicasets/test-recreate-deployment-589c4bfd,UID:718df542-638d-11ea-9978-0242ac11000d,ResourceVersion:509751,Generation:1,CreationTimestamp:2020-03-11 11:42:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 7042065e-638d-11ea-9978-0242ac11000d 0xc0027320cf 0xc0027320e0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 11:42:53.695: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 11 11:42:53.695: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-jcppw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jcppw/replicasets/test-recreate-deployment-5bf7f65dc,UID:70451690-638d-11ea-9978-0242ac11000d,ResourceVersion:509740,Generation:2,CreationTimestamp:2020-03-11 11:42:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 7042065e-638d-11ea-9978-0242ac11000d 0xc0027321a0 0xc0027321a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 11:42:53.699: INFO: Pod "test-recreate-deployment-589c4bfd-mpj6l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-mpj6l,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-jcppw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jcppw/pods/test-recreate-deployment-589c4bfd-mpj6l,UID:7192c5d5-638d-11ea-9978-0242ac11000d,ResourceVersion:509754,Generation:0,CreationTimestamp:2020-03-11 11:42:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 718df542-638d-11ea-9978-0242ac11000d 0xc001c320ef 0xc001c32100}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2xhl4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2xhl4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2xhl4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c32190} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c321c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:42:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:42:53 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 11:42:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-11 11:42:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:42:53.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-jcppw" for this suite. Mar 11 11:42:59.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:42:59.758: INFO: namespace: e2e-tests-deployment-jcppw, resource: bindings, ignored listing per whitelist Mar 11 11:42:59.790: INFO: namespace e2e-tests-deployment-jcppw deletion completed in 6.088260939s • [SLOW TEST:8.615 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:42:59.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-755af591-638d-11ea-bacb-0242ac11000a Mar 11 11:42:59.871: INFO: Pod name my-hostname-basic-755af591-638d-11ea-bacb-0242ac11000a: Found 0 pods out of 1 Mar 11 11:43:04.875: INFO: Pod name my-hostname-basic-755af591-638d-11ea-bacb-0242ac11000a: Found 1 pods out of 1 Mar 11 11:43:04.875: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-755af591-638d-11ea-bacb-0242ac11000a" are running Mar 11 11:43:04.878: INFO: Pod "my-hostname-basic-755af591-638d-11ea-bacb-0242ac11000a-lg9gj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 11:42:59 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 11:43:01 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 11:43:01 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 11:42:59 +0000 UTC Reason: Message:}]) Mar 11 11:43:04.878: INFO: Trying to dial the pod Mar 11 11:43:09.888: INFO: Controller my-hostname-basic-755af591-638d-11ea-bacb-0242ac11000a: Got expected result from replica 1 
[my-hostname-basic-755af591-638d-11ea-bacb-0242ac11000a-lg9gj]: "my-hostname-basic-755af591-638d-11ea-bacb-0242ac11000a-lg9gj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:43:09.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-hwjd4" for this suite. Mar 11 11:43:15.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:43:15.963: INFO: namespace: e2e-tests-replication-controller-hwjd4, resource: bindings, ignored listing per whitelist Mar 11 11:43:15.990: INFO: namespace e2e-tests-replication-controller-hwjd4 deletion completed in 6.098634489s • [SLOW TEST:16.200 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:43:15.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-jnzsd in namespace e2e-tests-proxy-xrhgb I0311 11:43:16.117710 6 runners.go:184] Created replication controller with name: proxy-service-jnzsd, namespace: e2e-tests-proxy-xrhgb, replica count: 1 I0311 11:43:17.168066 6 runners.go:184] proxy-service-jnzsd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0311 11:43:18.168258 6 runners.go:184] proxy-service-jnzsd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 11:43:19.168538 6 runners.go:184] proxy-service-jnzsd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 11:43:20.168720 6 runners.go:184] proxy-service-jnzsd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 11:43:21.168937 6 runners.go:184] proxy-service-jnzsd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 11:43:22.169185 6 runners.go:184] proxy-service-jnzsd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 11:43:23.169379 6 runners.go:184] proxy-service-jnzsd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 11:43:24.169607 6 runners.go:184] proxy-service-jnzsd Pods: 1 out of 1 created, 1 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 11 11:43:24.172: INFO: setup took 8.098709754s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 11 11:43:24.177: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-xrhgb/pods/http:proxy-service-jnzsd-4jbgk:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:43:37.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-lv7zt" for this suite. Mar 11 11:43:59.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:43:59.614: INFO: namespace: e2e-tests-replication-controller-lv7zt, resource: bindings, ignored listing per whitelist Mar 11 11:43:59.669: INFO: namespace e2e-tests-replication-controller-lv7zt deletion completed in 22.094726689s • [SLOW TEST:27.192 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:43:59.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-990f6ad4-638d-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 11:43:59.778: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-990fc571-638d-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-h9bts" to be "success or failure" Mar 11 11:43:59.783: INFO: Pod "pod-projected-secrets-990fc571-638d-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.956837ms Mar 11 11:44:01.787: INFO: Pod "pod-projected-secrets-990fc571-638d-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009048541s STEP: Saw pod success Mar 11 11:44:01.787: INFO: Pod "pod-projected-secrets-990fc571-638d-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:44:01.790: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-990fc571-638d-11ea-bacb-0242ac11000a container projected-secret-volume-test: STEP: delete the pod Mar 11 11:44:01.827: INFO: Waiting for pod pod-projected-secrets-990fc571-638d-11ea-bacb-0242ac11000a to disappear Mar 11 11:44:01.831: INFO: Pod pod-projected-secrets-990fc571-638d-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:44:01.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h9bts" for this suite. Mar 11 11:44:07.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:44:07.938: INFO: namespace: e2e-tests-projected-h9bts, resource: bindings, ignored listing per whitelist Mar 11 11:44:07.945: INFO: namespace e2e-tests-projected-h9bts deletion completed in 6.110894693s • [SLOW TEST:8.276 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:44:07.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 11 11:44:08.061: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-gh6xr,SelfLink:/api/v1/namespaces/e2e-tests-watch-gh6xr/configmaps/e2e-watch-test-resource-version,UID:9dfcd924-638d-11ea-9978-0242ac11000d,ResourceVersion:510065,Generation:0,CreationTimestamp:2020-03-11 11:44:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 11 11:44:08.061: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-gh6xr,SelfLink:/api/v1/namespaces/e2e-tests-watch-gh6xr/configmaps/e2e-watch-test-resource-version,UID:9dfcd924-638d-11ea-9978-0242ac11000d,ResourceVersion:510066,Generation:0,CreationTimestamp:2020-03-11 11:44:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:44:08.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-gh6xr" for this suite. Mar 11 11:44:14.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:44:14.156: INFO: namespace: e2e-tests-watch-gh6xr, resource: bindings, ignored listing per whitelist Mar 11 11:44:14.169: INFO: namespace e2e-tests-watch-gh6xr deletion completed in 6.103689849s • [SLOW TEST:6.223 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:44:14.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 11 11:44:16.802: INFO: Successfully updated pod "pod-update-a1b62f5a-638d-11ea-bacb-0242ac11000a" STEP: verifying the updated pod is in kubernetes Mar 11 11:44:16.815: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:44:16.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-bv9xg" for this suite. 
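The "Pods should be updated" spec above creates a pod, mutates it in place, and verifies the change is visible through the API. A minimal client-go sketch of that read-modify-write step follows; it is not the e2e framework's own code, and the namespace, pod name and kubeconfig path are placeholder assumptions. The conflict retry mirrors the usual pattern for concurrent updates.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Namespace and pod name are illustrative; the suite generates both per run.
	const ns, podName = "default", "pod-update-example"

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Read-modify-write the pod, retrying if the resourceVersion went stale in between.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // labels are mutable; most of a pod's spec is not
		_, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod", podName, "updated")
}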
Mar 11 11:44:38.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:44:38.912: INFO: namespace: e2e-tests-pods-bv9xg, resource: bindings, ignored listing per whitelist Mar 11 11:44:38.926: INFO: namespace e2e-tests-pods-bv9xg deletion completed in 22.10859632s • [SLOW TEST:24.758 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:44:38.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:44:39.016: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0737401-638d-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-2qjwk" to be "success or failure" Mar 11 11:44:39.018: INFO: Pod "downwardapi-volume-b0737401-638d-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.948914ms Mar 11 11:44:41.021: INFO: Pod "downwardapi-volume-b0737401-638d-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005534194s Mar 11 11:44:43.025: INFO: Pod "downwardapi-volume-b0737401-638d-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009045235s STEP: Saw pod success Mar 11 11:44:43.025: INFO: Pod "downwardapi-volume-b0737401-638d-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:44:43.027: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b0737401-638d-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:44:43.053: INFO: Waiting for pod downwardapi-volume-b0737401-638d-11ea-bacb-0242ac11000a to disappear Mar 11 11:44:43.059: INFO: Pod downwardapi-volume-b0737401-638d-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:44:43.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2qjwk" for this suite. 
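The projected downwardAPI spec above mounts the container's own CPU limit as a file and checks the pod's output. A rough client-go sketch of such a pod follows; the image, mount path, limit value, namespace and kubeconfig path are assumptions of this sketch, not values taken from the test.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ns = "default" // illustrative namespace

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					// The projected file below reports this limit, rounded up to whole cores by the default divisor of "1".
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created", pod.Name)
}

The later "node allocatable (cpu) as default cpu limit if the limit is not set" spec in this log exercises the same projection with Resources left empty, in which case the projected file reports the node's allocatable CPU instead.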
Mar 11 11:44:49.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:44:49.141: INFO: namespace: e2e-tests-projected-2qjwk, resource: bindings, ignored listing per whitelist Mar 11 11:44:49.167: INFO: namespace e2e-tests-projected-2qjwk deletion completed in 6.10533273s • [SLOW TEST:10.240 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:44:49.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Mar 11 11:44:49.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-bffw9 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 11 11:44:52.313: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0311 11:44:52.263110 1870 log.go:172] (0xc0005c6160) (0xc0008d4640) Create stream\nI0311 11:44:52.263154 1870 log.go:172] (0xc0005c6160) (0xc0008d4640) Stream added, broadcasting: 1\nI0311 11:44:52.267751 1870 log.go:172] (0xc0005c6160) Reply frame received for 1\nI0311 11:44:52.267799 1870 log.go:172] (0xc0005c6160) (0xc000994460) Create stream\nI0311 11:44:52.267808 1870 log.go:172] (0xc0005c6160) (0xc000994460) Stream added, broadcasting: 3\nI0311 11:44:52.271810 1870 log.go:172] (0xc0005c6160) Reply frame received for 3\nI0311 11:44:52.271868 1870 log.go:172] (0xc0005c6160) (0xc0007b8000) Create stream\nI0311 11:44:52.271883 1870 log.go:172] (0xc0005c6160) (0xc0007b8000) Stream added, broadcasting: 5\nI0311 11:44:52.273095 1870 log.go:172] (0xc0005c6160) Reply frame received for 5\nI0311 11:44:52.273142 1870 log.go:172] (0xc0005c6160) (0xc0008d46e0) Create stream\nI0311 11:44:52.273159 1870 log.go:172] (0xc0005c6160) (0xc0008d46e0) Stream added, broadcasting: 7\nI0311 11:44:52.275362 1870 log.go:172] (0xc0005c6160) Reply frame received for 7\nI0311 11:44:52.275461 1870 log.go:172] (0xc000994460) (3) Writing data frame\nI0311 11:44:52.275562 1870 log.go:172] (0xc000994460) (3) Writing data frame\nI0311 11:44:52.276662 1870 log.go:172] (0xc0005c6160) Data frame received for 5\nI0311 11:44:52.276678 1870 log.go:172] (0xc0007b8000) (5) Data frame handling\nI0311 11:44:52.276697 1870 log.go:172] (0xc0007b8000) (5) Data frame sent\nI0311 11:44:52.277312 1870 log.go:172] (0xc0005c6160) Data frame received for 5\nI0311 11:44:52.277330 1870 log.go:172] (0xc0007b8000) (5) Data frame handling\nI0311 11:44:52.277347 1870 log.go:172] (0xc0007b8000) (5) Data frame sent\nI0311 11:44:52.295561 1870 log.go:172] (0xc0005c6160) Data frame received for 5\nI0311 11:44:52.295588 1870 log.go:172] (0xc0007b8000) (5) Data frame handling\nI0311 11:44:52.295613 1870 log.go:172] (0xc0005c6160) Data frame received for 7\nI0311 11:44:52.295631 1870 log.go:172] (0xc0008d46e0) (7) Data frame handling\nI0311 11:44:52.295982 1870 log.go:172] (0xc0005c6160) Data frame received for 1\nI0311 11:44:52.296000 1870 log.go:172] (0xc0008d4640) (1) Data frame handling\nI0311 11:44:52.296011 1870 log.go:172] (0xc0008d4640) (1) Data frame sent\nI0311 11:44:52.296127 1870 log.go:172] (0xc0005c6160) (0xc0008d4640) Stream removed, broadcasting: 1\nI0311 11:44:52.296215 1870 log.go:172] (0xc0005c6160) (0xc0008d4640) Stream removed, broadcasting: 1\nI0311 11:44:52.296232 1870 log.go:172] (0xc0005c6160) (0xc000994460) Stream removed, broadcasting: 3\nI0311 11:44:52.296263 1870 log.go:172] (0xc0005c6160) (0xc0007b8000) Stream removed, broadcasting: 5\nI0311 11:44:52.296385 1870 log.go:172] (0xc0005c6160) (0xc0008d46e0) Stream removed, broadcasting: 7\n" Mar 11 11:44:52.313: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:44:54.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bffw9" for this suite. 
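The kubectl run --rm invocation above (with --generator=job/v1) creates a batch/v1 Job, attaches stdin to its pod, and deletes the Job when the command exits. A rough, non-interactive client-go sketch of the create-then-delete part follows; the attach step is omitted, and the names, image command and namespace are placeholders chosen for this sketch.

package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ns = "default" // illustrative namespace

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "e2e-test-rm-busybox-job",
						Image:   "docker.io/library/busybox:1.29",
						Command: []string{"sh", "-c", "echo 'stdin closed'"},
					}},
				},
			},
		},
	}
	created, err := cs.BatchV1().Jobs(ns).Create(ctx, job, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created job", created.Name)

	// In practice you would wait for completion here; --rm then deletes the Job
	// and, via foreground propagation, its pods.
	fg := metav1.DeletePropagationForeground
	if err := cs.BatchV1().Jobs(ns).Delete(ctx, created.Name, metav1.DeleteOptions{PropagationPolicy: &fg}); err != nil {
		panic(err)
	}
}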
Mar 11 11:45:00.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:45:00.369: INFO: namespace: e2e-tests-kubectl-bffw9, resource: bindings, ignored listing per whitelist Mar 11 11:45:00.406: INFO: namespace e2e-tests-kubectl-bffw9 deletion completed in 6.074181271s • [SLOW TEST:11.239 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:45:00.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-bd3e5695-638d-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:45:00.485: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd3efa39-638d-11ea-bacb-0242ac11000a" in namespace "e2e-tests-configmap-k6wgg" to be "success or failure" Mar 11 11:45:00.501: INFO: Pod "pod-configmaps-bd3efa39-638d-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.768223ms Mar 11 11:45:02.505: INFO: Pod "pod-configmaps-bd3efa39-638d-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01956412s STEP: Saw pod success Mar 11 11:45:02.505: INFO: Pod "pod-configmaps-bd3efa39-638d-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:45:02.507: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-bd3efa39-638d-11ea-bacb-0242ac11000a container configmap-volume-test: STEP: delete the pod Mar 11 11:45:02.522: INFO: Waiting for pod pod-configmaps-bd3efa39-638d-11ea-bacb-0242ac11000a to disappear Mar 11 11:45:02.526: INFO: Pod pod-configmaps-bd3efa39-638d-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:45:02.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-k6wgg" for this suite. 
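The ConfigMap volume spec above boils down to: create a ConfigMap, mount it into a pod as a configMap volume, and expect the container to print the key's value and exit, so that waiting for "success or failure" resolves to phase Succeeded. A minimal client-go sketch of the same arrangement follows; the names, data, image and mount path are illustrative assumptions.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ns = "default" // illustrative namespace

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created", pod.Name)
}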
Mar 11 11:45:08.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:45:08.560: INFO: namespace: e2e-tests-configmap-k6wgg, resource: bindings, ignored listing per whitelist Mar 11 11:45:08.617: INFO: namespace e2e-tests-configmap-k6wgg deletion completed in 6.087542726s • [SLOW TEST:8.211 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:45:08.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-7f7nt Mar 11 11:45:10.722: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-7f7nt STEP: checking the pod's current state and verifying that restartCount is present Mar 11 11:45:10.725: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:49:11.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-7f7nt" for this suite. 
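The probing spec above runs a pod whose HTTP liveness probe keeps succeeding and asserts that its restartCount stays at 0 for the whole observation window. A sketch of such a pod follows, assuming a recent client-go; the conformance test probes /healthz on its own test image, whereas this sketch probes / on the nginx image already used elsewhere in this run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ns = "default" // illustrative namespace

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "docker.io/library/nginx:1.14-alpine",
				LivenessProbe: &corev1.Probe{
					// This field is named Handler (not ProbeHandler) in pre-1.23 k8s.io/api packages.
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The spec then watches .status.containerStatuses[0].restartCount for about four
	// minutes and requires it to stay at 0 because the probe keeps succeeding.
	fmt.Println("created", pod.Name)
}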
Mar 11 11:49:17.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:49:17.540: INFO: namespace: e2e-tests-container-probe-7f7nt, resource: bindings, ignored listing per whitelist Mar 11 11:49:17.552: INFO: namespace e2e-tests-container-probe-7f7nt deletion completed in 6.099494173s • [SLOW TEST:248.935 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:49:17.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 11 11:49:17.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-csp4k' Mar 11 11:49:17.850: INFO: stderr: "" Mar 11 11:49:17.850: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 11 11:49:18.855: INFO: Selector matched 1 pods for map[app:redis] Mar 11 11:49:18.855: INFO: Found 0 / 1 Mar 11 11:49:19.853: INFO: Selector matched 1 pods for map[app:redis] Mar 11 11:49:19.853: INFO: Found 1 / 1 Mar 11 11:49:19.853: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 11 11:49:19.856: INFO: Selector matched 1 pods for map[app:redis] Mar 11 11:49:19.856: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 11 11:49:19.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-wkccx --namespace=e2e-tests-kubectl-csp4k -p {"metadata":{"annotations":{"x":"y"}}}' Mar 11 11:49:19.974: INFO: stderr: "" Mar 11 11:49:19.974: INFO: stdout: "pod/redis-master-wkccx patched\n" STEP: checking annotations Mar 11 11:49:19.987: INFO: Selector matched 1 pods for map[app:redis] Mar 11 11:49:19.987: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:49:19.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-csp4k" for this suite. 
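The kubectl patch call above applies a strategic merge patch that adds the annotation x=y to each pod of the RC. A client-go sketch of roughly the same single call follows; the pod name shown is the generated name from this particular run and the namespace is a placeholder.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Substitute the pod name and namespace from your own run.
	const ns, podName = "default", "redis-master-wkccx"

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same JSON body as `kubectl patch pod ... -p '{"metadata":{"annotations":{"x":"y"}}}'`,
	// sent as a strategic merge patch (kubectl's default patch type).
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	p, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), podName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("annotations now:", p.Annotations)
}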
Mar 11 11:49:42.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:49:42.105: INFO: namespace: e2e-tests-kubectl-csp4k, resource: bindings, ignored listing per whitelist Mar 11 11:49:42.121: INFO: namespace e2e-tests-kubectl-csp4k deletion completed in 22.130715605s • [SLOW TEST:24.568 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:49:42.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 11 11:49:42.226: INFO: Waiting up to 5m0s for pod "downward-api-652dc5bb-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-2s7j6" to be "success or failure" Mar 11 11:49:42.231: INFO: Pod "downward-api-652dc5bb-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.269177ms Mar 11 11:49:44.235: INFO: Pod "downward-api-652dc5bb-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009106405s STEP: Saw pod success Mar 11 11:49:44.235: INFO: Pod "downward-api-652dc5bb-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:49:44.238: INFO: Trying to get logs from node hunter-worker pod downward-api-652dc5bb-638e-11ea-bacb-0242ac11000a container dapi-container: STEP: delete the pod Mar 11 11:49:44.263: INFO: Waiting for pod downward-api-652dc5bb-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:49:44.267: INFO: Pod downward-api-652dc5bb-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:49:44.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2s7j6" for this suite. 
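The Downward API spec above injects limits.cpu and limits.memory as environment variables into a container that declares no limits of its own, and expects the values to fall back to the node's allocatable resources. A minimal sketch of such a pod follows; the variable names, image and namespace are assumptions.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ns = "default" // illustrative namespace

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				// No resources are set, so the resourceFieldRefs below resolve to node allocatable.
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created", pod.Name)
}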
Mar 11 11:49:50.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:49:50.328: INFO: namespace: e2e-tests-downward-api-2s7j6, resource: bindings, ignored listing per whitelist Mar 11 11:49:50.379: INFO: namespace e2e-tests-downward-api-2s7j6 deletion completed in 6.106807202s • [SLOW TEST:8.258 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:49:50.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-6a1ae65e-638e-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:49:50.496: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a1b411a-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-configmap-gmvqx" to be "success or failure" Mar 11 11:49:50.500: INFO: Pod "pod-configmaps-6a1b411a-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.495642ms Mar 11 11:49:52.504: INFO: Pod "pod-configmaps-6a1b411a-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008234348s STEP: Saw pod success Mar 11 11:49:52.504: INFO: Pod "pod-configmaps-6a1b411a-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:49:52.506: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-6a1b411a-638e-11ea-bacb-0242ac11000a container configmap-volume-test: STEP: delete the pod Mar 11 11:49:52.550: INFO: Waiting for pod pod-configmaps-6a1b411a-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:49:52.554: INFO: Pod pod-configmaps-6a1b411a-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:49:52.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gmvqx" for this suite. 
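This ConfigMap spec differs from the earlier plain-volume one only in that it pins file permissions: DefaultMode on the configMap volume source sets the mode of every projected file. A sketch under the same illustrative assumptions as before:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ns = "default" // illustrative namespace

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-modes"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	mode := int32(0o400) // files in the volume become read-only for the owner
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-modes"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						DefaultMode:          &mode,
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created", pod.Name)
}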
Mar 11 11:49:58.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:49:58.634: INFO: namespace: e2e-tests-configmap-gmvqx, resource: bindings, ignored listing per whitelist Mar 11 11:49:58.642: INFO: namespace e2e-tests-configmap-gmvqx deletion completed in 6.084422104s • [SLOW TEST:8.262 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:49:58.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:50:02.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-7l4cp" for this suite. 
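The Kubelet spec above schedules a one-shot busybox pod and asserts that whatever the command writes to stdout shows up in the pod's logs. A hedged sketch of the same round trip follows; the echoed text, names, namespace and the crude completion wait are all choices of this sketch rather than the test's.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ns = "default" // illustrative namespace

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-scheduling-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo 'Hello from busybox'"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Crude wait for the one-shot container to finish before asking for its logs.
	for i := 0; i < 30; i++ {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
			break
		}
		time.Sleep(2 * time.Second)
	}
	logs, err := cs.CoreV1().Pods(ns).GetLogs(pod.Name, &corev1.PodLogOptions{}).DoRaw(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(logs))
}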
Mar 11 11:50:52.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:50:52.837: INFO: namespace: e2e-tests-kubelet-test-7l4cp, resource: bindings, ignored listing per whitelist Mar 11 11:50:52.852: INFO: namespace e2e-tests-kubelet-test-7l4cp deletion completed in 50.097275165s • [SLOW TEST:54.210 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:50:52.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:50:52.956: INFO: Creating ReplicaSet my-hostname-basic-8f56e1ee-638e-11ea-bacb-0242ac11000a Mar 11 11:50:52.965: INFO: Pod name my-hostname-basic-8f56e1ee-638e-11ea-bacb-0242ac11000a: Found 0 pods out of 1 Mar 11 11:50:57.969: INFO: Pod name my-hostname-basic-8f56e1ee-638e-11ea-bacb-0242ac11000a: Found 1 pods out of 1 Mar 11 11:50:57.969: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8f56e1ee-638e-11ea-bacb-0242ac11000a" is running Mar 11 11:50:57.972: INFO: Pod "my-hostname-basic-8f56e1ee-638e-11ea-bacb-0242ac11000a-klgfp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 11:50:53 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 11:50:54 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 11:50:54 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 11:50:52 +0000 UTC Reason: Message:}]) Mar 11 11:50:57.972: INFO: Trying to dial the pod Mar 11 11:51:02.983: INFO: Controller my-hostname-basic-8f56e1ee-638e-11ea-bacb-0242ac11000a: Got expected result from replica 1 [my-hostname-basic-8f56e1ee-638e-11ea-bacb-0242ac11000a-klgfp]: "my-hostname-basic-8f56e1ee-638e-11ea-bacb-0242ac11000a-klgfp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:51:02.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-nlx9w" for this suite. 
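The ReplicaSet spec above creates a one-replica set, waits for its pod to run, and then dials the pod expecting it to answer with its own hostname. A sketch of just the ReplicaSet object follows (the dialing step is omitted); the names, namespace and image are assumptions of this sketch.

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ns = "default" // illustrative namespace

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"name": "my-hostname-basic-example"}
	replicas := int32(1)
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic-example"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: "my-hostname-basic-example",
						// The conformance spec uses an image that serves the pod's hostname over HTTP;
						// nginx (already pulled elsewhere in this run) stands in here.
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().ReplicaSets(ns).Create(context.TODO(), rs, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created replicaset", rs.Name)
}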
Mar 11 11:51:08.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:51:09.012: INFO: namespace: e2e-tests-replicaset-nlx9w, resource: bindings, ignored listing per whitelist Mar 11 11:51:09.082: INFO: namespace e2e-tests-replicaset-nlx9w deletion completed in 6.095561475s • [SLOW TEST:16.230 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:51:09.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:51:09.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99051235-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-8kf4c" to be "success or failure" Mar 11 11:51:09.208: INFO: Pod "downwardapi-volume-99051235-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41359ms Mar 11 11:51:11.212: INFO: Pod "downwardapi-volume-99051235-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008381276s STEP: Saw pod success Mar 11 11:51:11.212: INFO: Pod "downwardapi-volume-99051235-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:51:11.215: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-99051235-638e-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:51:11.246: INFO: Waiting for pod downwardapi-volume-99051235-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:51:11.251: INFO: Pod downwardapi-volume-99051235-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:51:11.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8kf4c" for this suite. 
Mar 11 11:51:17.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:51:17.285: INFO: namespace: e2e-tests-projected-8kf4c, resource: bindings, ignored listing per whitelist Mar 11 11:51:17.339: INFO: namespace e2e-tests-projected-8kf4c deletion completed in 6.084289938s • [SLOW TEST:8.257 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:51:17.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9de91357-638e-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:51:17.416: INFO: Waiting up to 5m0s for pod "pod-configmaps-9dea22c0-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-configmap-fm4tw" to be "success or failure" Mar 11 11:51:17.431: INFO: Pod "pod-configmaps-9dea22c0-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.98537ms Mar 11 11:51:19.435: INFO: Pod "pod-configmaps-9dea22c0-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018467564s STEP: Saw pod success Mar 11 11:51:19.435: INFO: Pod "pod-configmaps-9dea22c0-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:51:19.437: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-9dea22c0-638e-11ea-bacb-0242ac11000a container configmap-volume-test: STEP: delete the pod Mar 11 11:51:19.452: INFO: Waiting for pod pod-configmaps-9dea22c0-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:51:19.456: INFO: Pod pod-configmaps-9dea22c0-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:51:19.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fm4tw" for this suite. 
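The "mappings as non-root" ConfigMap spec combines two twists on the earlier ConfigMap sketches: an Items list that remaps a key to a nested path inside the volume, and a pod security context that runs the container with a non-root UID. A sketch under the same illustrative assumptions:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ns = "default" // illustrative namespace

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"},
		Data:       map[string]string{"data-2": "value-2"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	uid := int64(1000) // any non-zero UID runs the container as non-root
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-map-nonroot"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "docker.io/library/busybox:1.29",
				// The item mapping below exposes key data-2 at path/to/data-2 inside the volume.
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						Items:                []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created", pod.Name)
}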
Mar 11 11:51:25.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:51:25.527: INFO: namespace: e2e-tests-configmap-fm4tw, resource: bindings, ignored listing per whitelist Mar 11 11:51:25.576: INFO: namespace e2e-tests-configmap-fm4tw deletion completed in 6.116635352s • [SLOW TEST:8.237 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:51:25.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-a2da1e77-638e-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 11:51:25.704: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a2daac8e-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-gszzr" to be "success or failure" Mar 11 11:51:25.708: INFO: Pod "pod-projected-secrets-a2daac8e-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.596399ms Mar 11 11:51:27.712: INFO: Pod "pod-projected-secrets-a2daac8e-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008457918s STEP: Saw pod success Mar 11 11:51:27.712: INFO: Pod "pod-projected-secrets-a2daac8e-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:51:27.715: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-a2daac8e-638e-11ea-bacb-0242ac11000a container projected-secret-volume-test: STEP: delete the pod Mar 11 11:51:27.753: INFO: Waiting for pod pod-projected-secrets-a2daac8e-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:51:27.766: INFO: Pod pod-projected-secrets-a2daac8e-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:51:27.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gszzr" for this suite. 
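The projected secret case above mounts a Secret through a projected volume rather than a plain secret volume. A minimal sketch, with illustrative names and paths:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func projectedSecretPod(secretName string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret-volume",
                    MountPath: "/etc/projected-secret-volume",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            // The secret's keys appear as files under the mount path.
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                            },
                        }},
                    },
                },
            }},
        },
    }
}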
Mar 11 11:51:33.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:51:33.883: INFO: namespace: e2e-tests-projected-gszzr, resource: bindings, ignored listing per whitelist Mar 11 11:51:33.894: INFO: namespace e2e-tests-projected-gszzr deletion completed in 6.125179993s • [SLOW TEST:8.318 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:51:33.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-a7cc2174-638e-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:51:34.001: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a7ccbd15-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-p4hbz" to be "success or failure" Mar 11 11:51:34.016: INFO: Pod "pod-projected-configmaps-a7ccbd15-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.961237ms Mar 11 11:51:36.020: INFO: Pod "pod-projected-configmaps-a7ccbd15-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018821356s STEP: Saw pod success Mar 11 11:51:36.020: INFO: Pod "pod-projected-configmaps-a7ccbd15-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:51:36.022: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-a7ccbd15-638e-11ea-bacb-0242ac11000a container projected-configmap-volume-test: STEP: delete the pod Mar 11 11:51:36.043: INFO: Waiting for pod pod-projected-configmaps-a7ccbd15-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:51:36.048: INFO: Pod pod-projected-configmaps-a7ccbd15-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:51:36.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p4hbz" for this suite. 
Mar 11 11:51:42.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:51:42.116: INFO: namespace: e2e-tests-projected-p4hbz, resource: bindings, ignored listing per whitelist Mar 11 11:51:42.178: INFO: namespace e2e-tests-projected-p4hbz deletion completed in 6.126518257s • [SLOW TEST:8.283 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:51:42.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-acbd448f-638e-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 11:51:42.290: INFO: Waiting up to 5m0s for pod "pod-secrets-acbda2d3-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-secrets-tkc7t" to be "success or failure" Mar 11 11:51:42.296: INFO: Pod "pod-secrets-acbda2d3-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.14331ms Mar 11 11:51:44.298: INFO: Pod "pod-secrets-acbda2d3-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008009156s STEP: Saw pod success Mar 11 11:51:44.298: INFO: Pod "pod-secrets-acbda2d3-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:51:44.300: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-acbda2d3-638e-11ea-bacb-0242ac11000a container secret-volume-test: STEP: delete the pod Mar 11 11:51:44.321: INFO: Waiting for pod pod-secrets-acbda2d3-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:51:44.365: INFO: Pod pod-secrets-acbda2d3-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:51:44.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tkc7t" for this suite. 
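The Secrets-with-mappings case above remaps a secret key to a new path and pins that file's permission bits via items[].mode. A sketch with illustrative names and an assumed 0400 mode:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretMappedModePod(secretName string) *corev1.Pod {
    itemMode := int32(0400) // file mode requested for the remapped key
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -l /etc/secret-volume/new-path-data-1 && cat /etc/secret-volume/new-path-data-1"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
            }},
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName: secretName,
                        // Remap key "data-1" to a new file name with mode 0400.
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &itemMode}},
                    },
                },
            }},
        },
    }
}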
Mar 11 11:51:50.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:51:50.422: INFO: namespace: e2e-tests-secrets-tkc7t, resource: bindings, ignored listing per whitelist Mar 11 11:51:50.460: INFO: namespace e2e-tests-secrets-tkc7t deletion completed in 6.091537714s • [SLOW TEST:8.282 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:51:50.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-4htmg STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-4htmg STEP: Deleting pre-stop pod Mar 11 11:52:01.636: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:52:01.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-4htmg" for this suite. 
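The PreStop flow above runs a server pod plus a tester pod; deleting the tester fires its preStop hook, which reports back to the server, which is why the server's state shows "prestop": 1. The sketch below only illustrates how such a preStop HTTP hook is wired on a container; the endpoint, port and image are assumptions, not the exact ones the e2e test uses, and the handler type is corev1.Handler in the API vintage of this run (newer releases rename it LifecycleHandler).

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// preStopTesterPod builds a pod whose preStop hook calls back to a server
// pod (serverIP) as the container is being terminated.
func preStopTesterPod(serverIP string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "tester"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "tester",
                Image: "busybox",
                Lifecycle: &corev1.Lifecycle{
                    PreStop: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/write", // illustrative endpoint on the server pod
                            Host: serverIP,
                            Port: intstr.FromInt(8080),
                        },
                    },
                },
            }},
        },
    }
}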
Mar 11 11:52:39.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:52:39.727: INFO: namespace: e2e-tests-prestop-4htmg, resource: bindings, ignored listing per whitelist Mar 11 11:52:39.787: INFO: namespace e2e-tests-prestop-4htmg deletion completed in 38.132270169s • [SLOW TEST:49.327 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:52:39.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Mar 11 11:52:39.876: INFO: Waiting up to 5m0s for pod "var-expansion-cf0ee28d-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-var-expansion-7gwss" to be "success or failure" Mar 11 11:52:39.881: INFO: Pod "var-expansion-cf0ee28d-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.59355ms Mar 11 11:52:41.885: INFO: Pod "var-expansion-cf0ee28d-638e-11ea-bacb-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 2.009047466s Mar 11 11:52:43.889: INFO: Pod "var-expansion-cf0ee28d-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012908054s STEP: Saw pod success Mar 11 11:52:43.889: INFO: Pod "var-expansion-cf0ee28d-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:52:43.891: INFO: Trying to get logs from node hunter-worker pod var-expansion-cf0ee28d-638e-11ea-bacb-0242ac11000a container dapi-container: STEP: delete the pod Mar 11 11:52:43.931: INFO: Waiting for pod var-expansion-cf0ee28d-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:52:43.935: INFO: Pod var-expansion-cf0ee28d-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:52:43.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-7gwss" for this suite. 
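The variable-expansion case above relies on the kubelet expanding $(VAR) references in a container's args from that container's own env before the process starts. A minimal sketch with an illustrative variable name and value:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func varExpansionPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c"},
                // $(GREETING) is substituted from the env below before the
                // container starts; an escaped $$(GREETING) would be left alone.
                Args: []string{"echo $(GREETING)"},
                Env:  []corev1.EnvVar{{Name: "GREETING", Value: "test value"}},
            }},
        },
    }
}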
Mar 11 11:52:49.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:52:50.012: INFO: namespace: e2e-tests-var-expansion-7gwss, resource: bindings, ignored listing per whitelist Mar 11 11:52:50.056: INFO: namespace e2e-tests-var-expansion-7gwss deletion completed in 6.117556261s • [SLOW TEST:10.269 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:52:50.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-v5lk STEP: Creating a pod to test atomic-volume-subpath Mar 11 11:52:50.158: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-v5lk" in namespace "e2e-tests-subpath-q7kd9" to be "success or failure" Mar 11 11:52:50.171: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.747987ms Mar 11 11:52:52.175: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01642449s Mar 11 11:52:54.178: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Running", Reason="", readiness=false. Elapsed: 4.020051747s Mar 11 11:52:56.183: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Running", Reason="", readiness=false. Elapsed: 6.024132215s Mar 11 11:52:58.186: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Running", Reason="", readiness=false. Elapsed: 8.02774518s Mar 11 11:53:00.190: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Running", Reason="", readiness=false. Elapsed: 10.031924005s Mar 11 11:53:02.194: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Running", Reason="", readiness=false. Elapsed: 12.03594138s Mar 11 11:53:04.199: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Running", Reason="", readiness=false. Elapsed: 14.040530353s Mar 11 11:53:06.203: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Running", Reason="", readiness=false. Elapsed: 16.044932492s Mar 11 11:53:08.207: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Running", Reason="", readiness=false. Elapsed: 18.048897602s Mar 11 11:53:10.211: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Running", Reason="", readiness=false. Elapsed: 20.053088782s Mar 11 11:53:12.215: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Running", Reason="", readiness=false. Elapsed: 22.057028399s Mar 11 11:53:14.220: INFO: Pod "pod-subpath-test-secret-v5lk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.061119438s STEP: Saw pod success Mar 11 11:53:14.220: INFO: Pod "pod-subpath-test-secret-v5lk" satisfied condition "success or failure" Mar 11 11:53:14.222: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-v5lk container test-container-subpath-secret-v5lk: STEP: delete the pod Mar 11 11:53:14.254: INFO: Waiting for pod pod-subpath-test-secret-v5lk to disappear Mar 11 11:53:14.259: INFO: Pod pod-subpath-test-secret-v5lk no longer exists STEP: Deleting pod pod-subpath-test-secret-v5lk Mar 11 11:53:14.259: INFO: Deleting pod "pod-subpath-test-secret-v5lk" in namespace "e2e-tests-subpath-q7kd9" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:53:14.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-q7kd9" for this suite. Mar 11 11:53:20.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:53:20.328: INFO: namespace: e2e-tests-subpath-q7kd9, resource: bindings, ignored listing per whitelist Mar 11 11:53:20.350: INFO: namespace e2e-tests-subpath-q7kd9 deletion completed in 6.085745647s • [SLOW TEST:30.293 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:53:20.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 11:53:20.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-w58lm' Mar 11 11:53:20.556: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 11:53:20.556: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 11 11:53:20.583: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-9fffn] Mar 11 11:53:20.583: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-9fffn" in namespace "e2e-tests-kubectl-w58lm" to be "running and ready" Mar 11 11:53:20.622: INFO: Pod "e2e-test-nginx-rc-9fffn": Phase="Pending", Reason="", readiness=false. Elapsed: 39.50007ms Mar 11 11:53:22.626: INFO: Pod "e2e-test-nginx-rc-9fffn": Phase="Running", Reason="", readiness=true. Elapsed: 2.043648727s Mar 11 11:53:22.626: INFO: Pod "e2e-test-nginx-rc-9fffn" satisfied condition "running and ready" Mar 11 11:53:22.626: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-9fffn] Mar 11 11:53:22.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-w58lm' Mar 11 11:53:22.773: INFO: stderr: "" Mar 11 11:53:22.773: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Mar 11 11:53:22.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-w58lm' Mar 11 11:53:22.885: INFO: stderr: "" Mar 11 11:53:22.885: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:53:22.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-w58lm" for this suite. 
Mar 11 11:53:28.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:53:28.912: INFO: namespace: e2e-tests-kubectl-w58lm, resource: bindings, ignored listing per whitelist Mar 11 11:53:28.991: INFO: namespace e2e-tests-kubectl-w58lm deletion completed in 6.102794571s • [SLOW TEST:8.641 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:53:28.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 11 11:53:29.085: INFO: Waiting up to 5m0s for pod "downward-api-ec64c26c-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-xg6ss" to be "success or failure" Mar 11 11:53:29.089: INFO: Pod "downward-api-ec64c26c-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.691588ms Mar 11 11:53:31.093: INFO: Pod "downward-api-ec64c26c-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007530857s Mar 11 11:53:33.096: INFO: Pod "downward-api-ec64c26c-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011307407s STEP: Saw pod success Mar 11 11:53:33.097: INFO: Pod "downward-api-ec64c26c-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:53:33.099: INFO: Trying to get logs from node hunter-worker pod downward-api-ec64c26c-638e-11ea-bacb-0242ac11000a container dapi-container: STEP: delete the pod Mar 11 11:53:33.114: INFO: Waiting for pod downward-api-ec64c26c-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:53:33.119: INFO: Pod downward-api-ec64c26c-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:53:33.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xg6ss" for this suite. 
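The Downward API env-var case above injects pod metadata through env[].valueFrom.fieldRef. A sketch with illustrative variable names:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIEnvPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env | grep -E 'POD_NAME|POD_NAMESPACE|POD_IP'"},
                Env: []corev1.EnvVar{
                    {Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
                    {Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
                    {Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
                },
            }},
        },
    }
}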
Mar 11 11:53:39.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:53:39.197: INFO: namespace: e2e-tests-downward-api-xg6ss, resource: bindings, ignored listing per whitelist Mar 11 11:53:39.261: INFO: namespace e2e-tests-downward-api-xg6ss deletion completed in 6.138609813s • [SLOW TEST:10.270 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:53:39.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 11 11:53:39.366: INFO: Waiting up to 5m0s for pod "pod-f285f143-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-mtccd" to be "success or failure" Mar 11 11:53:39.370: INFO: Pod "pod-f285f143-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.388983ms Mar 11 11:53:41.373: INFO: Pod "pod-f285f143-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00697848s STEP: Saw pod success Mar 11 11:53:41.373: INFO: Pod "pod-f285f143-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:53:41.375: INFO: Trying to get logs from node hunter-worker pod pod-f285f143-638e-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 11:53:41.393: INFO: Waiting for pod pod-f285f143-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:53:41.397: INFO: Pod pod-f285f143-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:53:41.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mtccd" for this suite. 
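The emptyDir case above writes a 0644 file into a default-medium emptyDir as a non-root user and reads it back. A sketch with an assumed UID, path and file content:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirNonRootPod() *corev1.Pod {
    nonRootUser := int64(1000)
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // Write a file, set mode 0644, and list it to verify ownership and mode.
                Command:         []string{"sh", "-c", "echo content > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file"},
                SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUser},
                VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // Default medium: backed by the node's filesystem rather than tmpfs.
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
        },
    }
}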
Mar 11 11:53:47.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:53:47.444: INFO: namespace: e2e-tests-emptydir-mtccd, resource: bindings, ignored listing per whitelist Mar 11 11:53:47.475: INFO: namespace e2e-tests-emptydir-mtccd deletion completed in 6.075286732s • [SLOW TEST:8.213 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:53:47.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f764fd6b-638e-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 11:53:47.569: INFO: Waiting up to 5m0s for pod "pod-secrets-f7676ff0-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-secrets-n67rt" to be "success or failure" Mar 11 11:53:47.571: INFO: Pod "pod-secrets-f7676ff0-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.902649ms Mar 11 11:53:49.575: INFO: Pod "pod-secrets-f7676ff0-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005713495s STEP: Saw pod success Mar 11 11:53:49.575: INFO: Pod "pod-secrets-f7676ff0-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:53:49.578: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-f7676ff0-638e-11ea-bacb-0242ac11000a container secret-volume-test: STEP: delete the pod Mar 11 11:53:49.597: INFO: Waiting for pod pod-secrets-f7676ff0-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:53:49.601: INFO: Pod pod-secrets-f7676ff0-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:53:49.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-n67rt" for this suite. 
Mar 11 11:53:55.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:53:55.764: INFO: namespace: e2e-tests-secrets-n67rt, resource: bindings, ignored listing per whitelist Mar 11 11:53:55.768: INFO: namespace e2e-tests-secrets-n67rt deletion completed in 6.162560559s • [SLOW TEST:8.293 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:53:55.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:53:55.875: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc5d015a-638e-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-vq9np" to be "success or failure" Mar 11 11:53:55.880: INFO: Pod "downwardapi-volume-fc5d015a-638e-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.801734ms Mar 11 11:53:57.884: INFO: Pod "downwardapi-volume-fc5d015a-638e-11ea-bacb-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 2.008506087s Mar 11 11:53:59.888: INFO: Pod "downwardapi-volume-fc5d015a-638e-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012537835s STEP: Saw pod success Mar 11 11:53:59.888: INFO: Pod "downwardapi-volume-fc5d015a-638e-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:53:59.890: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-fc5d015a-638e-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:53:59.928: INFO: Waiting for pod downwardapi-volume-fc5d015a-638e-11ea-bacb-0242ac11000a to disappear Mar 11 11:53:59.934: INFO: Pod downwardapi-volume-fc5d015a-638e-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:53:59.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vq9np" for this suite. 
Mar 11 11:54:05.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:54:05.983: INFO: namespace: e2e-tests-downward-api-vq9np, resource: bindings, ignored listing per whitelist Mar 11 11:54:06.017: INFO: namespace e2e-tests-downward-api-vq9np deletion completed in 6.078386717s • [SLOW TEST:10.246 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:54:06.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:54:10.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-md92b" for this suite. 
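The Kubelet case above schedules a command that always fails and then asserts on the terminated state the kubelet records in the pod status. A sketch of a pod of that shape (the image and command are obvious stand-ins, not necessarily the test's exact ones):

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func alwaysFailingPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"},
        Spec: corev1.PodSpec{
            // The container exits non-zero every time it runs; after it has run,
            // status.containerStatuses[0].state (or lastState) carries a
            // "terminated" entry with a non-empty reason and the exit code,
            // which is what a check like this test's can assert on.
            Containers: []corev1.Container{{
                Name:    "bin-false",
                Image:   "busybox",
                Command: []string{"/bin/false"},
            }},
        },
    }
}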
Mar 11 11:54:16.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:54:16.181: INFO: namespace: e2e-tests-kubelet-test-md92b, resource: bindings, ignored listing per whitelist Mar 11 11:54:16.223: INFO: namespace e2e-tests-kubelet-test-md92b deletion completed in 6.089079269s • [SLOW TEST:10.206 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:54:16.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-bjs6 STEP: Creating a pod to test atomic-volume-subpath Mar 11 11:54:16.326: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bjs6" in namespace "e2e-tests-subpath-59d2g" to be "success or failure" Mar 11 11:54:16.345: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.315459ms Mar 11 11:54:18.348: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021850717s Mar 11 11:54:20.351: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Running", Reason="", readiness=false. Elapsed: 4.024766749s Mar 11 11:54:22.354: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Running", Reason="", readiness=false. Elapsed: 6.028042603s Mar 11 11:54:24.358: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Running", Reason="", readiness=false. Elapsed: 8.031103628s Mar 11 11:54:26.361: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Running", Reason="", readiness=false. Elapsed: 10.034464742s Mar 11 11:54:28.364: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Running", Reason="", readiness=false. Elapsed: 12.037111719s Mar 11 11:54:30.366: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Running", Reason="", readiness=false. Elapsed: 14.039902589s Mar 11 11:54:32.369: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Running", Reason="", readiness=false. Elapsed: 16.042610573s Mar 11 11:54:34.384: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Running", Reason="", readiness=false. Elapsed: 18.057665707s Mar 11 11:54:36.388: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.061169809s Mar 11 11:54:38.391: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Running", Reason="", readiness=false. Elapsed: 22.064347417s Mar 11 11:54:40.405: INFO: Pod "pod-subpath-test-configmap-bjs6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.078824769s STEP: Saw pod success Mar 11 11:54:40.405: INFO: Pod "pod-subpath-test-configmap-bjs6" satisfied condition "success or failure" Mar 11 11:54:40.407: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-bjs6 container test-container-subpath-configmap-bjs6: STEP: delete the pod Mar 11 11:54:40.421: INFO: Waiting for pod pod-subpath-test-configmap-bjs6 to disappear Mar 11 11:54:40.444: INFO: Pod pod-subpath-test-configmap-bjs6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-bjs6 Mar 11 11:54:40.444: INFO: Deleting pod "pod-subpath-test-configmap-bjs6" in namespace "e2e-tests-subpath-59d2g" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:54:40.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-59d2g" for this suite. Mar 11 11:54:46.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:54:46.539: INFO: namespace: e2e-tests-subpath-59d2g, resource: bindings, ignored listing per whitelist Mar 11 11:54:46.559: INFO: namespace e2e-tests-subpath-59d2g deletion completed in 6.110113906s • [SLOW TEST:30.335 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:54:46.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-1aa02381-638f-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:54:46.676: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1aa4ff4e-638f-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-4v8lq" to be "success or failure" Mar 11 11:54:46.680: INFO: Pod "pod-projected-configmaps-1aa4ff4e-638f-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.954223ms Mar 11 11:54:48.684: INFO: Pod "pod-projected-configmaps-1aa4ff4e-638f-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007777829s STEP: Saw pod success Mar 11 11:54:48.684: INFO: Pod "pod-projected-configmaps-1aa4ff4e-638f-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:54:48.686: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-1aa4ff4e-638f-11ea-bacb-0242ac11000a container projected-configmap-volume-test: STEP: delete the pod Mar 11 11:54:48.717: INFO: Waiting for pod pod-projected-configmaps-1aa4ff4e-638f-11ea-bacb-0242ac11000a to disappear Mar 11 11:54:48.722: INFO: Pod pod-projected-configmaps-1aa4ff4e-638f-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:54:48.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4v8lq" for this suite. Mar 11 11:54:54.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:54:54.793: INFO: namespace: e2e-tests-projected-4v8lq, resource: bindings, ignored listing per whitelist Mar 11 11:54:54.808: INFO: namespace e2e-tests-projected-4v8lq deletion completed in 6.083037361s • [SLOW TEST:8.249 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:54:54.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 11 11:54:54.888: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:54:58.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-xwl8b" for this suite. 
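The init-container case above hinges on the restart policy: with RestartPolicy Never a failed init container is not retried, the app container never starts, and the pod phase goes to Failed. A minimal sketch with illustrative names:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func failingInitContainerPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
        Spec: corev1.PodSpec{
            // Never restart: the failing init container fails the whole pod.
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{{
                Name:    "init1",
                Image:   "busybox",
                Command: []string{"/bin/false"},
            }},
            Containers: []corev1.Container{{
                Name:    "run1",
                Image:   "busybox",
                Command: []string{"/bin/true"},
            }},
        },
    }
}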
Mar 11 11:55:04.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:55:04.361: INFO: namespace: e2e-tests-init-container-xwl8b, resource: bindings, ignored listing per whitelist Mar 11 11:55:04.424: INFO: namespace e2e-tests-init-container-xwl8b deletion completed in 6.082783428s • [SLOW TEST:9.616 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:55:04.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:55:04.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wj5pz" for this suite. 
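The QOS-class case above only checks that status.qosClass is set once the pod is submitted; the class itself is derived from the containers' resources. The sketch below shows the Guaranteed shape (requests equal to limits for every resource); the quantities and names are illustrative.

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func guaranteedQOSPod() *corev1.Pod {
    res := corev1.ResourceList{
        corev1.ResourceCPU:    resource.MustParse("100m"),
        corev1.ResourceMemory: resource.MustParse("100Mi"),
    }
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "qos-container",
                Image: "busybox",
                // Requests == limits for every resource => status.qosClass: Guaranteed.
                Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
            }},
        },
    }
}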
Mar 11 11:55:26.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:55:26.661: INFO: namespace: e2e-tests-pods-wj5pz, resource: bindings, ignored listing per whitelist Mar 11 11:55:26.684: INFO: namespace e2e-tests-pods-wj5pz deletion completed in 22.130721675s • [SLOW TEST:22.260 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:55:26.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-k2lj8/configmap-test-328be617-638f-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume configMaps Mar 11 11:55:26.794: INFO: Waiting up to 5m0s for pod "pod-configmaps-328d6da5-638f-11ea-bacb-0242ac11000a" in namespace "e2e-tests-configmap-k2lj8" to be "success or failure" Mar 11 11:55:26.818: INFO: Pod "pod-configmaps-328d6da5-638f-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.58937ms Mar 11 11:55:28.821: INFO: Pod "pod-configmaps-328d6da5-638f-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026896002s STEP: Saw pod success Mar 11 11:55:28.821: INFO: Pod "pod-configmaps-328d6da5-638f-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:55:28.823: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-328d6da5-638f-11ea-bacb-0242ac11000a container env-test: STEP: delete the pod Mar 11 11:55:28.842: INFO: Waiting for pod pod-configmaps-328d6da5-638f-11ea-bacb-0242ac11000a to disappear Mar 11 11:55:28.852: INFO: Pod pod-configmaps-328d6da5-638f-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:55:28.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-k2lj8" for this suite. 
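The ConfigMap env case above surfaces a single ConfigMap key as an environment variable via env[].valueFrom.configMapKeyRef. A sketch with illustrative variable and key names:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func configMapEnvPod(configMapName string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env | grep CONFIG_DATA_1"},
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
}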
Mar 11 11:55:34.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:55:34.931: INFO: namespace: e2e-tests-configmap-k2lj8, resource: bindings, ignored listing per whitelist Mar 11 11:55:34.940: INFO: namespace e2e-tests-configmap-k2lj8 deletion completed in 6.084661909s • [SLOW TEST:8.256 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:55:34.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Mar 11 11:55:35.033: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix854698029/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:55:35.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jxjv7" for this suite. 
Mar 11 11:55:41.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:55:41.166: INFO: namespace: e2e-tests-kubectl-jxjv7, resource: bindings, ignored listing per whitelist Mar 11 11:55:41.179: INFO: namespace e2e-tests-kubectl-jxjv7 deletion completed in 6.088355718s • [SLOW TEST:6.239 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:55:41.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:55:47.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-f7cdh" for this suite. Mar 11 11:55:53.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:55:53.508: INFO: namespace: e2e-tests-namespaces-f7cdh, resource: bindings, ignored listing per whitelist Mar 11 11:55:53.538: INFO: namespace e2e-tests-namespaces-f7cdh deletion completed in 6.088766078s STEP: Destroying namespace "e2e-tests-nsdeletetest-ldg7q" for this suite. Mar 11 11:55:53.540: INFO: Namespace e2e-tests-nsdeletetest-ldg7q was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-wz78g" for this suite. 
Mar 11 11:55:59.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:55:59.625: INFO: namespace: e2e-tests-nsdeletetest-wz78g, resource: bindings, ignored listing per whitelist Mar 11 11:55:59.656: INFO: namespace e2e-tests-nsdeletetest-wz78g deletion completed in 6.115627485s • [SLOW TEST:18.476 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:55:59.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4631bbfb-638f-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 11:55:59.781: INFO: Waiting up to 5m0s for pod "pod-secrets-4632b071-638f-11ea-bacb-0242ac11000a" in namespace "e2e-tests-secrets-fc7nm" to be "success or failure" Mar 11 11:55:59.789: INFO: Pod "pod-secrets-4632b071-638f-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.398308ms Mar 11 11:56:01.793: INFO: Pod "pod-secrets-4632b071-638f-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012230359s STEP: Saw pod success Mar 11 11:56:01.793: INFO: Pod "pod-secrets-4632b071-638f-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:56:01.796: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-4632b071-638f-11ea-bacb-0242ac11000a container secret-volume-test: STEP: delete the pod Mar 11 11:56:01.834: INFO: Waiting for pod pod-secrets-4632b071-638f-11ea-bacb-0242ac11000a to disappear Mar 11 11:56:01.843: INFO: Pod pod-secrets-4632b071-638f-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:56:01.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-fc7nm" for this suite. 
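The Secrets test above mounts a Secret as a volume and reads a key back from the filesystem inside the pod. A hand-rolled version might look like the following; the secret name, key, and image are illustrative.

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]   # each secret key appears as a file under the mount path
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF
kubectl logs secret-volume-demo   # prints "value-1" once the pod has completed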
Mar 11 11:56:07.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:56:07.948: INFO: namespace: e2e-tests-secrets-fc7nm, resource: bindings, ignored listing per whitelist Mar 11 11:56:07.964: INFO: namespace e2e-tests-secrets-fc7nm deletion completed in 6.116541986s • [SLOW TEST:8.308 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:56:07.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:56:10.155: INFO: Waiting up to 5m0s for pod "client-envvars-4c6719cb-638f-11ea-bacb-0242ac11000a" in namespace "e2e-tests-pods-v8qpn" to be "success or failure" Mar 11 11:56:10.168: INFO: Pod "client-envvars-4c6719cb-638f-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.469151ms Mar 11 11:56:12.171: INFO: Pod "client-envvars-4c6719cb-638f-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015817393s Mar 11 11:56:14.175: INFO: Pod "client-envvars-4c6719cb-638f-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019809194s STEP: Saw pod success Mar 11 11:56:14.175: INFO: Pod "client-envvars-4c6719cb-638f-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:56:14.178: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-4c6719cb-638f-11ea-bacb-0242ac11000a container env3cont: STEP: delete the pod Mar 11 11:56:14.200: INFO: Waiting for pod client-envvars-4c6719cb-638f-11ea-bacb-0242ac11000a to disappear Mar 11 11:56:14.203: INFO: Pod client-envvars-4c6719cb-638f-11ea-bacb-0242ac11000a no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:56:14.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-v8qpn" for this suite. 
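The Pods test above verifies that a pod created after a Service sees that Service's injected environment variables. A small sketch of the same behaviour (service and pod names are placeholders); the variables follow the SERVICE_NAME_SERVICE_HOST / _PORT convention, upper-cased with dashes mapped to underscores.

kubectl create service clusterip envvar-demo --tcp=8080:8080
kubectl run env-check --image=busybox --restart=Never --command -- sh -c 'env | grep ENVVAR_DEMO'
sleep 5                   # give the pod time to run to completion
kubectl logs env-check    # ENVVAR_DEMO_SERVICE_HOST=... and ENVVAR_DEMO_SERVICE_PORT=8080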
Mar 11 11:56:58.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:56:58.284: INFO: namespace: e2e-tests-pods-v8qpn, resource: bindings, ignored listing per whitelist Mar 11 11:56:58.307: INFO: namespace e2e-tests-pods-v8qpn deletion completed in 44.100354026s • [SLOW TEST:50.343 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:56:58.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 11 11:56:58.390: INFO: Waiting up to 5m0s for pod "pod-6926d9fc-638f-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-2426b" to be "success or failure" Mar 11 11:56:58.395: INFO: Pod "pod-6926d9fc-638f-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488981ms Mar 11 11:57:00.452: INFO: Pod "pod-6926d9fc-638f-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06137097s STEP: Saw pod success Mar 11 11:57:00.452: INFO: Pod "pod-6926d9fc-638f-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:57:00.454: INFO: Trying to get logs from node hunter-worker2 pod pod-6926d9fc-638f-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 11:57:00.732: INFO: Waiting for pod pod-6926d9fc-638f-11ea-bacb-0242ac11000a to disappear Mar 11 11:57:00.743: INFO: Pod pod-6926d9fc-638f-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:57:00.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2426b" for this suite. 
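The EmptyDir test above checks the default 0777 permissions of an emptyDir volume backed by tmpfs. An illustrative pod that mounts such a volume and prints its filesystem type and mode (names and image are placeholders):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep ' /cache ' ; stat -c '%a' /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory    # back the volume with tmpfs instead of the node's disk
EOF
kubectl logs emptydir-tmpfs-demo   # shows a tmpfs mount and mode 777 once the pod has completed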
Mar 11 11:57:06.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:57:06.803: INFO: namespace: e2e-tests-emptydir-2426b, resource: bindings, ignored listing per whitelist Mar 11 11:57:06.834: INFO: namespace e2e-tests-emptydir-2426b deletion completed in 6.088030854s • [SLOW TEST:8.527 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:57:06.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 11:57:06.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-m6x8b' Mar 11 11:57:08.522: INFO: stderr: "" Mar 11 11:57:08.522: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Mar 11 11:57:08.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-m6x8b' Mar 11 11:57:17.894: INFO: stderr: "" Mar 11 11:57:17.894: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:57:17.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m6x8b" for this suite. 
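The Kubectl test above runs kubectl run with --restart=Never (plus the old --generator=run-pod/v1 flag) to create a bare pod. On newer kubectl releases kubectl run creates only Pods and the generator flag is no longer needed, so the equivalent is roughly:

kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl get pod e2e-test-nginx-pod     # a single Pod with no ReplicaSet or Deployment behind it
kubectl delete pod e2e-test-nginx-pod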
Mar 11 11:57:23.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:57:23.947: INFO: namespace: e2e-tests-kubectl-m6x8b, resource: bindings, ignored listing per whitelist Mar 11 11:57:23.989: INFO: namespace e2e-tests-kubectl-m6x8b deletion completed in 6.090975465s • [SLOW TEST:17.155 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:57:23.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 11 11:57:24.084: INFO: Waiting up to 5m0s for pod "pod-78776df6-638f-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-xt6bd" to be "success or failure" Mar 11 11:57:24.105: INFO: Pod "pod-78776df6-638f-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.520181ms Mar 11 11:57:26.108: INFO: Pod "pod-78776df6-638f-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023804857s STEP: Saw pod success Mar 11 11:57:26.108: INFO: Pod "pod-78776df6-638f-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:57:26.111: INFO: Trying to get logs from node hunter-worker pod pod-78776df6-638f-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 11:57:26.134: INFO: Waiting for pod pod-78776df6-638f-11ea-bacb-0242ac11000a to disappear Mar 11 11:57:26.139: INFO: Pod pod-78776df6-638f-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:57:26.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xt6bd" for this suite. 
Mar 11 11:57:32.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:57:32.169: INFO: namespace: e2e-tests-emptydir-xt6bd, resource: bindings, ignored listing per whitelist Mar 11 11:57:32.249: INFO: namespace e2e-tests-emptydir-xt6bd deletion completed in 6.104575719s • [SLOW TEST:8.260 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:57:32.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:57:34.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-nzthd" for this suite. 
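The Kubelet test above schedules a busybox pod with hostAliases and asserts that the extra entries appear in /etc/hosts. A minimal manifest exercising the same field (the IP and hostnames are made up):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo   # the kubelet-managed hosts file contains a "127.0.0.1 foo.local bar.local" entry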
Mar 11 11:58:18.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:58:18.489: INFO: namespace: e2e-tests-kubelet-test-nzthd, resource: bindings, ignored listing per whitelist Mar 11 11:58:18.504: INFO: namespace e2e-tests-kubelet-test-nzthd deletion completed in 44.115094655s • [SLOW TEST:46.254 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:58:18.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 11:58:18.602: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98f5623b-638f-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-8vw8h" to be "success or failure" Mar 11 11:58:18.619: INFO: Pod "downwardapi-volume-98f5623b-638f-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.659611ms Mar 11 11:58:20.642: INFO: Pod "downwardapi-volume-98f5623b-638f-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040348867s STEP: Saw pod success Mar 11 11:58:20.642: INFO: Pod "downwardapi-volume-98f5623b-638f-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:58:20.645: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-98f5623b-638f-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 11:58:20.662: INFO: Waiting for pod downwardapi-volume-98f5623b-638f-11ea-bacb-0242ac11000a to disappear Mar 11 11:58:20.720: INFO: Pod downwardapi-volume-98f5623b-638f-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:58:20.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8vw8h" for this suite. 
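The Downward API test above exposes the container's memory request through a downwardAPI volume and reads it back from a file. A sketch of the same wiring (file name, request size, and image are illustrative); with the default divisor of 1 the value is reported in bytes.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "mem_request"
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
kubectl logs downwardapi-memory-demo   # prints 33554432 (32Mi expressed in bytes)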
Mar 11 11:58:26.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:58:26.780: INFO: namespace: e2e-tests-downward-api-8vw8h, resource: bindings, ignored listing per whitelist Mar 11 11:58:26.815: INFO: namespace e2e-tests-downward-api-8vw8h deletion completed in 6.091406818s • [SLOW TEST:8.311 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:58:26.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:58:52.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-wdgqz" for this suite. 
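The Container Runtime test above starts containers that exit under each restart policy (the terminate-cmd-rpa/-rpof/-rpn suffixes correspond to Always, OnFailure, and Never) and checks the resulting Phase, Ready condition, State, and RestartCount. One of those cases, a failing command under restartPolicy Never, sketched by hand (names are placeholders):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF
sleep 5
kubectl get pod terminate-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}{"\n"}'
# With restartPolicy Never and a non-zero exit code the phase ends up Failed and restartCount stays 0.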
Mar 11 11:58:58.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:58:58.322: INFO: namespace: e2e-tests-container-runtime-wdgqz, resource: bindings, ignored listing per whitelist Mar 11 11:58:58.399: INFO: namespace e2e-tests-container-runtime-wdgqz deletion completed in 6.122827716s • [SLOW TEST:31.584 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:58:58.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-mckq STEP: Creating a pod to test atomic-volume-subpath Mar 11 11:58:58.506: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mckq" in namespace "e2e-tests-subpath-vljkt" to be "success or failure" Mar 11 11:58:58.524: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Pending", Reason="", readiness=false. Elapsed: 17.822942ms Mar 11 11:59:00.529: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02209296s Mar 11 11:59:02.533: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Running", Reason="", readiness=true. Elapsed: 4.02619143s Mar 11 11:59:04.536: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Running", Reason="", readiness=false. Elapsed: 6.029961453s Mar 11 11:59:06.540: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Running", Reason="", readiness=false. Elapsed: 8.033823626s Mar 11 11:59:08.543: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Running", Reason="", readiness=false. Elapsed: 10.036837359s Mar 11 11:59:10.548: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Running", Reason="", readiness=false. Elapsed: 12.041193691s Mar 11 11:59:12.551: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Running", Reason="", readiness=false. Elapsed: 14.044972609s Mar 11 11:59:14.556: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Running", Reason="", readiness=false. Elapsed: 16.049081213s Mar 11 11:59:16.558: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.051963368s Mar 11 11:59:18.563: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Running", Reason="", readiness=false. Elapsed: 20.056360416s Mar 11 11:59:20.566: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Running", Reason="", readiness=false. Elapsed: 22.059638278s Mar 11 11:59:22.570: INFO: Pod "pod-subpath-test-configmap-mckq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.063781688s STEP: Saw pod success Mar 11 11:59:22.570: INFO: Pod "pod-subpath-test-configmap-mckq" satisfied condition "success or failure" Mar 11 11:59:22.573: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-mckq container test-container-subpath-configmap-mckq: STEP: delete the pod Mar 11 11:59:22.591: INFO: Waiting for pod pod-subpath-test-configmap-mckq to disappear Mar 11 11:59:22.610: INFO: Pod pod-subpath-test-configmap-mckq no longer exists STEP: Deleting pod pod-subpath-test-configmap-mckq Mar 11 11:59:22.610: INFO: Deleting pod "pod-subpath-test-configmap-mckq" in namespace "e2e-tests-subpath-vljkt" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:59:22.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-vljkt" for this suite. Mar 11 11:59:28.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:59:28.671: INFO: namespace: e2e-tests-subpath-vljkt, resource: bindings, ignored listing per whitelist Mar 11 11:59:28.721: INFO: namespace e2e-tests-subpath-vljkt deletion completed in 6.086319524s • [SLOW TEST:30.321 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:59:28.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 11 11:59:28.837: INFO: Waiting up to 5m0s for pod "pod-c2d241a8-638f-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-ffk6w" to be "success or failure" Mar 11 11:59:28.873: INFO: Pod "pod-c2d241a8-638f-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.535829ms Mar 11 11:59:30.877: INFO: Pod "pod-c2d241a8-638f-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.040234059s STEP: Saw pod success Mar 11 11:59:30.877: INFO: Pod "pod-c2d241a8-638f-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 11:59:30.880: INFO: Trying to get logs from node hunter-worker2 pod pod-c2d241a8-638f-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 11:59:30.896: INFO: Waiting for pod pod-c2d241a8-638f-11ea-bacb-0242ac11000a to disappear Mar 11 11:59:30.933: INFO: Pod pod-c2d241a8-638f-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:59:30.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ffk6w" for this suite. Mar 11 11:59:36.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 11:59:36.957: INFO: namespace: e2e-tests-emptydir-ffk6w, resource: bindings, ignored listing per whitelist Mar 11 11:59:37.018: INFO: namespace e2e-tests-emptydir-ffk6w deletion completed in 6.081819706s • [SLOW TEST:8.297 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 11:59:37.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 11:59:37.109: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 11:59:39.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-7v76w" for this suite. 
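The Pods test above opens a streaming connection to the pod's exec subresource over WebSockets. From the command line the same subresource is exercised by kubectl exec, which negotiates the streaming protocol for you (pod name and command are placeholders):

kubectl run ws-exec-demo --image=busybox --restart=Never --command -- sleep 3600
kubectl wait --for=condition=Ready pod/ws-exec-demo
# Under the hood this talks to the exec subresource, roughly:
#   /api/v1/namespaces/default/pods/ws-exec-demo/exec?command=cat&command=/etc/hostname&stdout=true
kubectl exec ws-exec-demo -- cat /etc/hostname
kubectl delete pod ws-exec-demo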
Mar 11 12:00:19.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:00:19.294: INFO: namespace: e2e-tests-pods-7v76w, resource: bindings, ignored listing per whitelist Mar 11 12:00:19.355: INFO: namespace e2e-tests-pods-7v76w deletion completed in 40.092412144s • [SLOW TEST:42.337 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:00:19.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 12:00:19.433: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:00:21.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-rd85h" for this suite. 
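The Pods test above reads container logs through the pod's log subresource over a WebSocket. The same subresource can be fetched with plain HTTP through kubectl proxy, as sketched here (pod name, proxy port, and echoed text are made up):

kubectl run ws-logs-demo --image=busybox --restart=Never --command -- echo hello-from-the-pod
sleep 5                      # give the pod a moment to run to completion
kubectl proxy --port=8001 &
sleep 1
curl --silent "http://127.0.0.1:8001/api/v1/namespaces/default/pods/ws-logs-demo/log"   # prints hello-from-the-pod
kill %1                      # stop the background proxy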
Mar 11 12:01:11.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:01:11.595: INFO: namespace: e2e-tests-pods-rd85h, resource: bindings, ignored listing per whitelist Mar 11 12:01:11.658: INFO: namespace e2e-tests-pods-rd85h deletion completed in 50.151030418s • [SLOW TEST:52.303 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:01:11.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cppt2 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 11 12:01:11.785: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 11 12:01:33.874: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.111:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-cppt2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 12:01:33.874: INFO: >>> kubeConfig: /root/.kube/config I0311 12:01:33.909006 6 log.go:172] (0xc000df6fd0) (0xc001db4460) Create stream I0311 12:01:33.909035 6 log.go:172] (0xc000df6fd0) (0xc001db4460) Stream added, broadcasting: 1 I0311 12:01:33.912693 6 log.go:172] (0xc000df6fd0) Reply frame received for 1 I0311 12:01:33.912732 6 log.go:172] (0xc000df6fd0) (0xc00084c1e0) Create stream I0311 12:01:33.912745 6 log.go:172] (0xc000df6fd0) (0xc00084c1e0) Stream added, broadcasting: 3 I0311 12:01:33.913635 6 log.go:172] (0xc000df6fd0) Reply frame received for 3 I0311 12:01:33.913667 6 log.go:172] (0xc000df6fd0) (0xc001ac2000) Create stream I0311 12:01:33.913684 6 log.go:172] (0xc000df6fd0) (0xc001ac2000) Stream added, broadcasting: 5 I0311 12:01:33.914658 6 log.go:172] (0xc000df6fd0) Reply frame received for 5 I0311 12:01:33.990468 6 log.go:172] (0xc000df6fd0) Data frame received for 5 I0311 12:01:33.990513 6 log.go:172] (0xc001ac2000) (5) Data frame handling I0311 12:01:33.990539 6 log.go:172] (0xc000df6fd0) Data frame received for 3 I0311 12:01:33.990549 6 log.go:172] (0xc00084c1e0) (3) Data frame handling I0311 12:01:33.990562 6 log.go:172] (0xc00084c1e0) (3) Data frame sent I0311 12:01:33.990571 6 log.go:172] (0xc000df6fd0) Data frame received for 3 I0311 12:01:33.990579 6 log.go:172] (0xc00084c1e0) (3) Data frame handling I0311 12:01:33.992117 6 log.go:172] (0xc000df6fd0) Data frame received for 1 I0311 
12:01:33.992133 6 log.go:172] (0xc001db4460) (1) Data frame handling I0311 12:01:33.992141 6 log.go:172] (0xc001db4460) (1) Data frame sent I0311 12:01:33.992150 6 log.go:172] (0xc000df6fd0) (0xc001db4460) Stream removed, broadcasting: 1 I0311 12:01:33.992215 6 log.go:172] (0xc000df6fd0) Go away received I0311 12:01:33.992249 6 log.go:172] (0xc000df6fd0) (0xc001db4460) Stream removed, broadcasting: 1 I0311 12:01:33.992276 6 log.go:172] (0xc000df6fd0) (0xc00084c1e0) Stream removed, broadcasting: 3 I0311 12:01:33.992293 6 log.go:172] (0xc000df6fd0) (0xc001ac2000) Stream removed, broadcasting: 5 Mar 11 12:01:33.992: INFO: Found all expected endpoints: [netserver-0] Mar 11 12:01:33.995: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.216:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-cppt2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 12:01:33.995: INFO: >>> kubeConfig: /root/.kube/config I0311 12:01:34.023222 6 log.go:172] (0xc000d9a790) (0xc00084c500) Create stream I0311 12:01:34.023245 6 log.go:172] (0xc000d9a790) (0xc00084c500) Stream added, broadcasting: 1 I0311 12:01:34.025032 6 log.go:172] (0xc000d9a790) Reply frame received for 1 I0311 12:01:34.025071 6 log.go:172] (0xc000d9a790) (0xc00084c5a0) Create stream I0311 12:01:34.025081 6 log.go:172] (0xc000d9a790) (0xc00084c5a0) Stream added, broadcasting: 3 I0311 12:01:34.025790 6 log.go:172] (0xc000d9a790) Reply frame received for 3 I0311 12:01:34.025819 6 log.go:172] (0xc000d9a790) (0xc00248c000) Create stream I0311 12:01:34.025830 6 log.go:172] (0xc000d9a790) (0xc00248c000) Stream added, broadcasting: 5 I0311 12:01:34.026586 6 log.go:172] (0xc000d9a790) Reply frame received for 5 I0311 12:01:34.088486 6 log.go:172] (0xc000d9a790) Data frame received for 5 I0311 12:01:34.088531 6 log.go:172] (0xc00248c000) (5) Data frame handling I0311 12:01:34.088555 6 log.go:172] (0xc000d9a790) Data frame received for 3 I0311 12:01:34.088564 6 log.go:172] (0xc00084c5a0) (3) Data frame handling I0311 12:01:34.088574 6 log.go:172] (0xc00084c5a0) (3) Data frame sent I0311 12:01:34.088583 6 log.go:172] (0xc000d9a790) Data frame received for 3 I0311 12:01:34.088590 6 log.go:172] (0xc00084c5a0) (3) Data frame handling I0311 12:01:34.089898 6 log.go:172] (0xc000d9a790) Data frame received for 1 I0311 12:01:34.089921 6 log.go:172] (0xc00084c500) (1) Data frame handling I0311 12:01:34.089940 6 log.go:172] (0xc00084c500) (1) Data frame sent I0311 12:01:34.089952 6 log.go:172] (0xc000d9a790) (0xc00084c500) Stream removed, broadcasting: 1 I0311 12:01:34.089964 6 log.go:172] (0xc000d9a790) Go away received I0311 12:01:34.090186 6 log.go:172] (0xc000d9a790) (0xc00084c500) Stream removed, broadcasting: 1 I0311 12:01:34.090210 6 log.go:172] (0xc000d9a790) (0xc00084c5a0) Stream removed, broadcasting: 3 I0311 12:01:34.090225 6 log.go:172] (0xc000d9a790) (0xc00248c000) Stream removed, broadcasting: 5 Mar 11 12:01:34.090: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:01:34.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-cppt2" for this suite. 
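The Networking test above curls each netserver pod's /hostName endpoint on port 8080 from a host-network test container to prove node-to-pod HTTP connectivity. A simplified reachability check by pod IP is sketched below; it goes pod-to-pod rather than node-to-pod, and the images, names, and port are illustrative rather than the test's netserver image.

kubectl run netserver --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl wait --for=condition=Ready pod/netserver
POD_IP=$(kubectl get pod netserver -o jsonpath='{.status.podIP}')
kubectl run curl-client --image=busybox --restart=Never --command -- wget -qO- "http://${POD_IP}:80/"
sleep 5
kubectl logs curl-client   # the nginx welcome page, fetched by pod IP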
Mar 11 12:01:56.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:01:56.147: INFO: namespace: e2e-tests-pod-network-test-cppt2, resource: bindings, ignored listing per whitelist Mar 11 12:01:56.228: INFO: namespace e2e-tests-pod-network-test-cppt2 deletion completed in 22.134646276s • [SLOW TEST:44.569 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:01:56.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 11 12:01:56.336: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-a,UID:1aba24a4-6390-11ea-9978-0242ac11000d,ResourceVersion:513393,Generation:0,CreationTimestamp:2020-03-11 12:01:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 11 12:01:56.336: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-a,UID:1aba24a4-6390-11ea-9978-0242ac11000d,ResourceVersion:513393,Generation:0,CreationTimestamp:2020-03-11 12:01:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 11 12:02:06.341: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-a,UID:1aba24a4-6390-11ea-9978-0242ac11000d,ResourceVersion:513413,Generation:0,CreationTimestamp:2020-03-11 12:01:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 11 12:02:06.342: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-a,UID:1aba24a4-6390-11ea-9978-0242ac11000d,ResourceVersion:513413,Generation:0,CreationTimestamp:2020-03-11 12:01:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 11 12:02:16.347: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-a,UID:1aba24a4-6390-11ea-9978-0242ac11000d,ResourceVersion:513433,Generation:0,CreationTimestamp:2020-03-11 12:01:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 11 12:02:16.347: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-a,UID:1aba24a4-6390-11ea-9978-0242ac11000d,ResourceVersion:513433,Generation:0,CreationTimestamp:2020-03-11 12:01:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 11 12:02:26.368: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-a,UID:1aba24a4-6390-11ea-9978-0242ac11000d,ResourceVersion:513453,Generation:0,CreationTimestamp:2020-03-11 12:01:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Mar 11 12:02:26.368: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-a,UID:1aba24a4-6390-11ea-9978-0242ac11000d,ResourceVersion:513453,Generation:0,CreationTimestamp:2020-03-11 12:01:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 11 12:02:36.375: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-b,UID:329b7f7a-6390-11ea-9978-0242ac11000d,ResourceVersion:513473,Generation:0,CreationTimestamp:2020-03-11 12:02:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 11 12:02:36.375: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-b,UID:329b7f7a-6390-11ea-9978-0242ac11000d,ResourceVersion:513473,Generation:0,CreationTimestamp:2020-03-11 12:02:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 11 12:02:46.380: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-b,UID:329b7f7a-6390-11ea-9978-0242ac11000d,ResourceVersion:513494,Generation:0,CreationTimestamp:2020-03-11 12:02:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 11 12:02:46.380: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-d64c6,SelfLink:/api/v1/namespaces/e2e-tests-watch-d64c6/configmaps/e2e-watch-test-configmap-b,UID:329b7f7a-6390-11ea-9978-0242ac11000d,ResourceVersion:513494,Generation:0,CreationTimestamp:2020-03-11 12:02:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:02:56.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-d64c6" for this suite. Mar 11 12:03:02.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:03:02.475: INFO: namespace: e2e-tests-watch-d64c6, resource: bindings, ignored listing per whitelist Mar 11 12:03:02.520: INFO: namespace e2e-tests-watch-d64c6 deletion completed in 6.135743366s • [SLOW TEST:66.292 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:03:02.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 11 12:03:02.620: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 11 12:03:02.627: INFO: Waiting for terminating namespaces to be deleted... Mar 11 12:03:02.630: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 11 12:03:02.635: INFO: kube-proxy-h66sh from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 11 12:03:02.635: INFO: Container kube-proxy ready: true, restart count 0 Mar 11 12:03:02.635: INFO: kindnet-jjqmp from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 11 12:03:02.635: INFO: Container kindnet-cni ready: true, restart count 0 Mar 11 12:03:02.635: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 11 12:03:02.640: INFO: kube-proxy-chv9d from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 11 12:03:02.640: INFO: Container kube-proxy ready: true, restart count 0 Mar 11 12:03:02.640: INFO: kindnet-nwqfj from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 11 12:03:02.640: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fb3e4a44bfb028], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
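The SchedulerPredicates test above creates a pod whose nodeSelector matches no node and then waits for exactly the FailedScheduling event quoted in the log. The same behaviour can be reproduced with a manifest like this (the label key and pod name are placeholders):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    example.com/does-not-exist: "true"   # no node carries this label
  containers:
  - name: pause
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl describe pod restricted-pod-demo   # Events: FailedScheduling ... node(s) didn't match node selector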
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:03:03.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-9btkv" for this suite. Mar 11 12:03:09.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:03:09.743: INFO: namespace: e2e-tests-sched-pred-9btkv, resource: bindings, ignored listing per whitelist Mar 11 12:03:09.747: INFO: namespace e2e-tests-sched-pred-9btkv deletion completed in 6.086825s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.227 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:03:09.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Mar 11 12:03:09.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:10.064: INFO: stderr: "" Mar 11 12:03:10.064: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 11 12:03:10.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:10.182: INFO: stderr: "" Mar 11 12:03:10.182: INFO: stdout: "update-demo-nautilus-gwwcl update-demo-nautilus-tgjfk " Mar 11 12:03:10.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gwwcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:10.264: INFO: stderr: "" Mar 11 12:03:10.264: INFO: stdout: "" Mar 11 12:03:10.264: INFO: update-demo-nautilus-gwwcl is created but not running Mar 11 12:03:15.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:15.394: INFO: stderr: "" Mar 11 12:03:15.394: INFO: stdout: "update-demo-nautilus-gwwcl update-demo-nautilus-tgjfk " Mar 11 12:03:15.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gwwcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:15.497: INFO: stderr: "" Mar 11 12:03:15.497: INFO: stdout: "true" Mar 11 12:03:15.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gwwcl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:15.578: INFO: stderr: "" Mar 11 12:03:15.578: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 12:03:15.578: INFO: validating pod update-demo-nautilus-gwwcl Mar 11 12:03:15.588: INFO: got data: { "image": "nautilus.jpg" } Mar 11 12:03:15.588: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 12:03:15.588: INFO: update-demo-nautilus-gwwcl is verified up and running Mar 11 12:03:15.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgjfk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:15.683: INFO: stderr: "" Mar 11 12:03:15.683: INFO: stdout: "true" Mar 11 12:03:15.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgjfk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:15.762: INFO: stderr: "" Mar 11 12:03:15.762: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 12:03:15.762: INFO: validating pod update-demo-nautilus-tgjfk Mar 11 12:03:15.765: INFO: got data: { "image": "nautilus.jpg" } Mar 11 12:03:15.765: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 11 12:03:15.765: INFO: update-demo-nautilus-tgjfk is verified up and running STEP: rolling-update to new replication controller Mar 11 12:03:15.767: INFO: scanned /root for discovery docs: Mar 11 12:03:15.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:38.410: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 11 12:03:38.410: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 11 12:03:38.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:38.488: INFO: stderr: "" Mar 11 12:03:38.488: INFO: stdout: "update-demo-kitten-dw4sv update-demo-kitten-lxps4 " Mar 11 12:03:38.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dw4sv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:38.558: INFO: stderr: "" Mar 11 12:03:38.558: INFO: stdout: "true" Mar 11 12:03:38.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dw4sv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:38.618: INFO: stderr: "" Mar 11 12:03:38.618: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 11 12:03:38.618: INFO: validating pod update-demo-kitten-dw4sv Mar 11 12:03:38.621: INFO: got data: { "image": "kitten.jpg" } Mar 11 12:03:38.621: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 11 12:03:38.621: INFO: update-demo-kitten-dw4sv is verified up and running Mar 11 12:03:38.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lxps4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:38.703: INFO: stderr: "" Mar 11 12:03:38.703: INFO: stdout: "true" Mar 11 12:03:38.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lxps4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8c4dc' Mar 11 12:03:38.771: INFO: stderr: "" Mar 11 12:03:38.771: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 11 12:03:38.771: INFO: validating pod update-demo-kitten-lxps4 Mar 11 12:03:38.774: INFO: got data: { "image": "kitten.jpg" } Mar 11 12:03:38.774: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 11 12:03:38.774: INFO: update-demo-kitten-lxps4 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:03:38.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8c4dc" for this suite. Mar 11 12:04:00.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:04:00.850: INFO: namespace: e2e-tests-kubectl-8c4dc, resource: bindings, ignored listing per whitelist Mar 11 12:04:00.870: INFO: namespace e2e-tests-kubectl-8c4dc deletion completed in 22.094375454s • [SLOW TEST:51.123 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:04:00.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 12:04:00.956: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6505334a-6390-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-ddfm4" to be "success or failure" Mar 11 12:04:00.961: INFO: Pod "downwardapi-volume-6505334a-6390-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.558337ms Mar 11 12:04:02.964: INFO: Pod "downwardapi-volume-6505334a-6390-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007908346s Mar 11 12:04:04.968: INFO: Pod "downwardapi-volume-6505334a-6390-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011663659s STEP: Saw pod success Mar 11 12:04:04.968: INFO: Pod "downwardapi-volume-6505334a-6390-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 12:04:04.971: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6505334a-6390-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 12:04:04.992: INFO: Waiting for pod downwardapi-volume-6505334a-6390-11ea-bacb-0242ac11000a to disappear Mar 11 12:04:05.006: INFO: Pod downwardapi-volume-6505334a-6390-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:04:05.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ddfm4" for this suite. Mar 11 12:04:11.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:04:11.078: INFO: namespace: e2e-tests-downward-api-ddfm4, resource: bindings, ignored listing per whitelist Mar 11 12:04:11.097: INFO: namespace e2e-tests-downward-api-ddfm4 deletion completed in 6.08724935s • [SLOW TEST:10.226 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:04:11.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Mar 11 12:04:11.191: INFO: Waiting up to 5m0s for pod "client-containers-6b1ee28b-6390-11ea-bacb-0242ac11000a" in namespace "e2e-tests-containers-9mxfc" to be "success or failure" Mar 11 12:04:11.195: INFO: Pod "client-containers-6b1ee28b-6390-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.454235ms Mar 11 12:04:13.199: INFO: Pod "client-containers-6b1ee28b-6390-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008360264s STEP: Saw pod success Mar 11 12:04:13.199: INFO: Pod "client-containers-6b1ee28b-6390-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 12:04:13.201: INFO: Trying to get logs from node hunter-worker pod client-containers-6b1ee28b-6390-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 12:04:13.219: INFO: Waiting for pod client-containers-6b1ee28b-6390-11ea-bacb-0242ac11000a to disappear Mar 11 12:04:13.223: INFO: Pod client-containers-6b1ee28b-6390-11ea-bacb-0242ac11000a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:04:13.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-9mxfc" for this suite. Mar 11 12:04:19.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:04:19.298: INFO: namespace: e2e-tests-containers-9mxfc, resource: bindings, ignored listing per whitelist Mar 11 12:04:19.313: INFO: namespace e2e-tests-containers-9mxfc deletion completed in 6.086471512s • [SLOW TEST:8.216 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:04:19.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 11 12:04:19.403: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 11 12:04:24.407: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:04:25.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-lwt8g" for this suite. 
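[editor's note] The ReplicationController check above rests on one property: an RC only owns pods whose labels match its selector, so rewriting a pod's label "releases" it while the RC creates a replacement. A hand-run sketch with made-up names (namespace, RC name and image are all illustrative, not the test's):

#!/usr/bin/env bash
# Illustrative only: relabelling a pod releases it from its ReplicationController.
set -euo pipefail
NS=rc-release-demo
kubectl create namespace "$NS"

cat <<'EOF' | kubectl create -n "$NS" -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF

# Wait for the single replica to come up.
until [ "$(kubectl get pods -n "$NS" -l name=pod-release -o jsonpath='{.items[0].status.phase}' 2>/dev/null)" = "Running" ]; do
  sleep 1
done

# Change the matched label: the RC stops selecting this pod and spins up a new one.
POD=$(kubectl get pod -n "$NS" -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" -n "$NS" name=released --overwrite

kubectl get pods -n "$NS" --show-labels   # expect the released pod plus a fresh replacement

kubectl delete namespace "$NS"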
Mar 11 12:04:31.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:04:31.503: INFO: namespace: e2e-tests-replication-controller-lwt8g, resource: bindings, ignored listing per whitelist Mar 11 12:04:31.520: INFO: namespace e2e-tests-replication-controller-lwt8g deletion completed in 6.085842564s • [SLOW TEST:12.208 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:04:31.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 11 12:04:31.644: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:31.647: INFO: Number of nodes with available pods: 0 Mar 11 12:04:31.647: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:32.651: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:32.654: INFO: Number of nodes with available pods: 0 Mar 11 12:04:32.654: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:33.651: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:33.654: INFO: Number of nodes with available pods: 2 Mar 11 12:04:33.654: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
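[editor's note] The "stop a daemon pod, check that the daemon pod is revived" step that follows can also be observed by hand: delete one of the DaemonSet's pods and watch the controller recreate it on the same node. In the sketch below the DaemonSet name matches the test's "daemon-set", but the name=daemon-set label selector and the current-namespace assumption are guesses for illustration, not taken from the test source.

#!/usr/bin/env bash
# Manual version of the "stop and revive" check; label selector is an assumption.
set -euo pipefail

# Pick one daemon pod and note the node it runs on.
POD=$(kubectl get pods -l name=daemon-set -o jsonpath='{.items[0].metadata.name}')
NODE=$(kubectl get pod "$POD" -o jsonpath='{.spec.nodeName}')
echo "deleting $POD on $NODE"

kubectl delete pod "$POD"

# The DaemonSet controller should reschedule on the same node; poll until
# ready == desired again, which is what the test's loop below is doing.
until [ "$(kubectl get ds daemon-set -o jsonpath='{.status.numberReady}')" = \
        "$(kubectl get ds daemon-set -o jsonpath='{.status.desiredNumberScheduled}')" ]; do
  sleep 1
done
kubectl get pods -l name=daemon-set -o wide   # a fresh pod should be back on $NODE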
Mar 11 12:04:33.674: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:33.676: INFO: Number of nodes with available pods: 1 Mar 11 12:04:33.676: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:34.680: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:34.682: INFO: Number of nodes with available pods: 1 Mar 11 12:04:34.682: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:35.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:35.685: INFO: Number of nodes with available pods: 1 Mar 11 12:04:35.685: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:36.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:36.685: INFO: Number of nodes with available pods: 1 Mar 11 12:04:36.685: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:37.689: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:37.692: INFO: Number of nodes with available pods: 1 Mar 11 12:04:37.692: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:38.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:38.684: INFO: Number of nodes with available pods: 1 Mar 11 12:04:38.684: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:39.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:39.685: INFO: Number of nodes with available pods: 1 Mar 11 12:04:39.685: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:40.680: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:40.683: INFO: Number of nodes with available pods: 1 Mar 11 12:04:40.683: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:41.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:41.684: INFO: Number of nodes with available pods: 1 Mar 11 12:04:41.684: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:42.680: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:42.682: INFO: Number of nodes with available pods: 1 Mar 11 12:04:42.682: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:43.680: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:43.683: INFO: Number of nodes with available pods: 1 Mar 11 12:04:43.683: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:44.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:44.685: INFO: Number of nodes with available pods: 1 Mar 11 12:04:44.685: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:45.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:45.685: INFO: Number of nodes with available pods: 1 Mar 11 12:04:45.685: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:46.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:46.684: INFO: Number of nodes with available pods: 1 Mar 11 12:04:46.684: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:47.681: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:47.684: INFO: Number of nodes with available pods: 1 Mar 11 12:04:47.684: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:48.680: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:48.683: INFO: Number of nodes with available pods: 1 Mar 11 12:04:48.683: INFO: Node hunter-worker is running more than one daemon pod Mar 11 12:04:49.682: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 12:04:49.685: INFO: Number of nodes with available pods: 2 Mar 11 12:04:49.685: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-67psp, will wait for the garbage collector to delete the pods Mar 11 12:04:49.746: INFO: Deleting DaemonSet.extensions daemon-set took: 5.902833ms Mar 11 12:04:49.847: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.255072ms Mar 11 12:04:58.050: INFO: Number of nodes with available pods: 0 Mar 11 12:04:58.050: INFO: Number of running nodes: 0, number of available pods: 0 Mar 11 12:04:58.053: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-67psp/daemonsets","resourceVersion":"514046"},"items":null} Mar 11 12:04:58.055: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-67psp/pods","resourceVersion":"514046"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:04:58.064: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-67psp" for this suite. Mar 11 12:05:04.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:05:04.188: INFO: namespace: e2e-tests-daemonsets-67psp, resource: bindings, ignored listing per whitelist Mar 11 12:05:04.192: INFO: namespace e2e-tests-daemonsets-67psp deletion completed in 6.124908029s • [SLOW TEST:32.671 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:05:04.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-gf6hq I0311 12:05:04.300229 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-gf6hq, replica count: 1 I0311 12:05:05.350600 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0311 12:05:06.350797 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 11 12:05:06.480: INFO: Created: latency-svc-2d7f5 Mar 11 12:05:06.491: INFO: Got endpoints: latency-svc-2d7f5 [39.994248ms] Mar 11 12:05:06.567: INFO: Created: latency-svc-zq4bh Mar 11 12:05:06.570: INFO: Got endpoints: latency-svc-zq4bh [79.490437ms] Mar 11 12:05:06.597: INFO: Created: latency-svc-l78qm Mar 11 12:05:06.603: INFO: Got endpoints: latency-svc-l78qm [111.934018ms] Mar 11 12:05:06.620: INFO: Created: latency-svc-c7kqs Mar 11 12:05:06.627: INFO: Got endpoints: latency-svc-c7kqs [135.657599ms] Mar 11 12:05:06.647: INFO: Created: latency-svc-4sqpf Mar 11 12:05:06.656: INFO: Got endpoints: latency-svc-4sqpf [165.131096ms] Mar 11 12:05:06.698: INFO: Created: latency-svc-7vglw Mar 11 12:05:06.700: INFO: Got endpoints: latency-svc-7vglw [208.720075ms] Mar 11 12:05:06.725: INFO: Created: latency-svc-lfk4s Mar 11 12:05:06.730: INFO: Got endpoints: latency-svc-lfk4s [238.084058ms] Mar 11 12:05:06.749: INFO: Created: latency-svc-hwbgk Mar 11 12:05:06.755: INFO: Got endpoints: latency-svc-hwbgk [262.489057ms] Mar 11 12:05:06.788: INFO: Created: latency-svc-dj2z8 Mar 11 12:05:06.797: INFO: Got endpoints: latency-svc-dj2z8 [304.200592ms] Mar 11 12:05:06.843: INFO: Created: latency-svc-kqfbk Mar 11 12:05:06.857: INFO: Got endpoints: latency-svc-kqfbk [364.840276ms] Mar 11 12:05:06.875: INFO: Created: latency-svc-r9xpr Mar 11 12:05:06.882: INFO: Got endpoints: latency-svc-r9xpr [388.539235ms] Mar 11 12:05:06.923: INFO: Created: latency-svc-xl8sn 
Mar 11 12:05:06.930: INFO: Got endpoints: latency-svc-xl8sn [435.934857ms] Mar 11 12:05:06.967: INFO: Created: latency-svc-cd5kh Mar 11 12:05:06.970: INFO: Got endpoints: latency-svc-cd5kh [476.169736ms] Mar 11 12:05:06.993: INFO: Created: latency-svc-tpjnd Mar 11 12:05:07.009: INFO: Got endpoints: latency-svc-tpjnd [515.080549ms] Mar 11 12:05:07.029: INFO: Created: latency-svc-6wgrb Mar 11 12:05:07.033: INFO: Got endpoints: latency-svc-6wgrb [538.973457ms] Mar 11 12:05:07.053: INFO: Created: latency-svc-7g24n Mar 11 12:05:07.099: INFO: Got endpoints: latency-svc-7g24n [605.988315ms] Mar 11 12:05:07.121: INFO: Created: latency-svc-h4bzs Mar 11 12:05:07.129: INFO: Got endpoints: latency-svc-h4bzs [558.824786ms] Mar 11 12:05:07.153: INFO: Created: latency-svc-xsrtv Mar 11 12:05:07.154: INFO: Got endpoints: latency-svc-xsrtv [550.949678ms] Mar 11 12:05:07.179: INFO: Created: latency-svc-jjzpv Mar 11 12:05:07.196: INFO: Got endpoints: latency-svc-jjzpv [568.981388ms] Mar 11 12:05:07.249: INFO: Created: latency-svc-5n4qf Mar 11 12:05:07.252: INFO: Got endpoints: latency-svc-5n4qf [595.200646ms] Mar 11 12:05:07.287: INFO: Created: latency-svc-ph4jp Mar 11 12:05:07.292: INFO: Got endpoints: latency-svc-ph4jp [591.642968ms] Mar 11 12:05:07.341: INFO: Created: latency-svc-2vq4t Mar 11 12:05:07.380: INFO: Got endpoints: latency-svc-2vq4t [650.369415ms] Mar 11 12:05:07.397: INFO: Created: latency-svc-5bmjm Mar 11 12:05:07.407: INFO: Got endpoints: latency-svc-5bmjm [652.270855ms] Mar 11 12:05:07.427: INFO: Created: latency-svc-d2hpm Mar 11 12:05:07.431: INFO: Got endpoints: latency-svc-d2hpm [634.1263ms] Mar 11 12:05:07.452: INFO: Created: latency-svc-sjbsx Mar 11 12:05:07.456: INFO: Got endpoints: latency-svc-sjbsx [598.484845ms] Mar 11 12:05:07.480: INFO: Created: latency-svc-z8hgk Mar 11 12:05:07.518: INFO: Got endpoints: latency-svc-z8hgk [636.514982ms] Mar 11 12:05:07.520: INFO: Created: latency-svc-km76l Mar 11 12:05:07.528: INFO: Got endpoints: latency-svc-km76l [598.437093ms] Mar 11 12:05:07.551: INFO: Created: latency-svc-gbbkn Mar 11 12:05:07.558: INFO: Got endpoints: latency-svc-gbbkn [588.906973ms] Mar 11 12:05:07.589: INFO: Created: latency-svc-g27vl Mar 11 12:05:07.595: INFO: Got endpoints: latency-svc-g27vl [585.9611ms] Mar 11 12:05:07.614: INFO: Created: latency-svc-87q78 Mar 11 12:05:07.650: INFO: Got endpoints: latency-svc-87q78 [617.118428ms] Mar 11 12:05:07.671: INFO: Created: latency-svc-jnl89 Mar 11 12:05:07.679: INFO: Got endpoints: latency-svc-jnl89 [580.58396ms] Mar 11 12:05:07.733: INFO: Created: latency-svc-gwsdr Mar 11 12:05:07.745: INFO: Got endpoints: latency-svc-gwsdr [616.029262ms] Mar 11 12:05:07.785: INFO: Created: latency-svc-tl7pn Mar 11 12:05:07.794: INFO: Got endpoints: latency-svc-tl7pn [639.841229ms] Mar 11 12:05:07.815: INFO: Created: latency-svc-6msjt Mar 11 12:05:07.824: INFO: Got endpoints: latency-svc-6msjt [627.897436ms] Mar 11 12:05:07.846: INFO: Created: latency-svc-9sgxt Mar 11 12:05:07.855: INFO: Got endpoints: latency-svc-9sgxt [602.93926ms] Mar 11 12:05:07.877: INFO: Created: latency-svc-zv2gx Mar 11 12:05:07.937: INFO: Got endpoints: latency-svc-zv2gx [645.326218ms] Mar 11 12:05:07.940: INFO: Created: latency-svc-rpnhg Mar 11 12:05:07.954: INFO: Got endpoints: latency-svc-rpnhg [573.498494ms] Mar 11 12:05:07.974: INFO: Created: latency-svc-xskc9 Mar 11 12:05:07.977: INFO: Got endpoints: latency-svc-xskc9 [569.62972ms] Mar 11 12:05:08.001: INFO: Created: latency-svc-jzwxp Mar 11 12:05:08.006: INFO: Got endpoints: latency-svc-jzwxp [574.558319ms] Mar 
11 12:05:08.025: INFO: Created: latency-svc-tbrvq Mar 11 12:05:08.030: INFO: Got endpoints: latency-svc-tbrvq [574.346906ms] Mar 11 12:05:08.088: INFO: Created: latency-svc-sjlrl Mar 11 12:05:08.096: INFO: Got endpoints: latency-svc-sjlrl [577.590254ms] Mar 11 12:05:08.124: INFO: Created: latency-svc-46zfj Mar 11 12:05:08.128: INFO: Got endpoints: latency-svc-46zfj [599.486439ms] Mar 11 12:05:08.169: INFO: Created: latency-svc-74dfs Mar 11 12:05:08.175: INFO: Got endpoints: latency-svc-74dfs [616.365908ms] Mar 11 12:05:08.231: INFO: Created: latency-svc-46xzf Mar 11 12:05:08.234: INFO: Got endpoints: latency-svc-46xzf [638.985727ms] Mar 11 12:05:08.256: INFO: Created: latency-svc-w8qkt Mar 11 12:05:08.259: INFO: Got endpoints: latency-svc-w8qkt [609.52119ms] Mar 11 12:05:08.286: INFO: Created: latency-svc-f572r Mar 11 12:05:08.290: INFO: Got endpoints: latency-svc-f572r [610.869249ms] Mar 11 12:05:08.307: INFO: Created: latency-svc-fpbvv Mar 11 12:05:08.314: INFO: Got endpoints: latency-svc-fpbvv [568.771912ms] Mar 11 12:05:08.369: INFO: Created: latency-svc-mfwz4 Mar 11 12:05:08.397: INFO: Got endpoints: latency-svc-mfwz4 [602.881664ms] Mar 11 12:05:08.398: INFO: Created: latency-svc-2vrh9 Mar 11 12:05:08.405: INFO: Got endpoints: latency-svc-2vrh9 [580.421905ms] Mar 11 12:05:08.423: INFO: Created: latency-svc-jwsqs Mar 11 12:05:08.429: INFO: Got endpoints: latency-svc-jwsqs [574.328045ms] Mar 11 12:05:08.448: INFO: Created: latency-svc-7rv6p Mar 11 12:05:08.453: INFO: Got endpoints: latency-svc-7rv6p [515.766502ms] Mar 11 12:05:08.513: INFO: Created: latency-svc-rczxr Mar 11 12:05:08.515: INFO: Got endpoints: latency-svc-rczxr [561.366198ms] Mar 11 12:05:08.547: INFO: Created: latency-svc-czdw2 Mar 11 12:05:08.556: INFO: Got endpoints: latency-svc-czdw2 [579.225119ms] Mar 11 12:05:08.578: INFO: Created: latency-svc-g6jfm Mar 11 12:05:08.586: INFO: Got endpoints: latency-svc-g6jfm [580.338108ms] Mar 11 12:05:08.656: INFO: Created: latency-svc-x57d2 Mar 11 12:05:08.659: INFO: Got endpoints: latency-svc-x57d2 [628.320457ms] Mar 11 12:05:08.683: INFO: Created: latency-svc-hnwch Mar 11 12:05:08.685: INFO: Got endpoints: latency-svc-hnwch [589.435367ms] Mar 11 12:05:08.712: INFO: Created: latency-svc-7hfzr Mar 11 12:05:08.733: INFO: Got endpoints: latency-svc-7hfzr [604.960949ms] Mar 11 12:05:08.734: INFO: Created: latency-svc-trmrv Mar 11 12:05:08.737: INFO: Got endpoints: latency-svc-trmrv [562.045319ms] Mar 11 12:05:08.801: INFO: Created: latency-svc-tm8bn Mar 11 12:05:08.802: INFO: Got endpoints: latency-svc-tm8bn [568.421464ms] Mar 11 12:05:08.850: INFO: Created: latency-svc-kxdsm Mar 11 12:05:08.858: INFO: Got endpoints: latency-svc-kxdsm [598.584389ms] Mar 11 12:05:08.938: INFO: Created: latency-svc-mzfhf Mar 11 12:05:08.940: INFO: Got endpoints: latency-svc-mzfhf [649.947446ms] Mar 11 12:05:08.980: INFO: Created: latency-svc-brgch Mar 11 12:05:08.986: INFO: Got endpoints: latency-svc-brgch [671.788651ms] Mar 11 12:05:09.015: INFO: Created: latency-svc-lnhpb Mar 11 12:05:09.021: INFO: Got endpoints: latency-svc-lnhpb [623.700734ms] Mar 11 12:05:09.081: INFO: Created: latency-svc-rwpcv Mar 11 12:05:09.084: INFO: Got endpoints: latency-svc-rwpcv [679.075214ms] Mar 11 12:05:09.126: INFO: Created: latency-svc-xrxth Mar 11 12:05:09.130: INFO: Got endpoints: latency-svc-xrxth [700.637933ms] Mar 11 12:05:09.153: INFO: Created: latency-svc-2jhb6 Mar 11 12:05:09.160: INFO: Got endpoints: latency-svc-2jhb6 [706.35654ms] Mar 11 12:05:09.178: INFO: Created: latency-svc-2ft7x Mar 11 12:05:09.231: 
INFO: Got endpoints: latency-svc-2ft7x [715.346709ms] Mar 11 12:05:09.243: INFO: Created: latency-svc-6zqrz Mar 11 12:05:09.250: INFO: Got endpoints: latency-svc-6zqrz [694.051138ms] Mar 11 12:05:09.270: INFO: Created: latency-svc-mrp2f Mar 11 12:05:09.293: INFO: Got endpoints: latency-svc-mrp2f [706.704057ms] Mar 11 12:05:09.312: INFO: Created: latency-svc-6twv9 Mar 11 12:05:09.317: INFO: Got endpoints: latency-svc-6twv9 [658.099561ms] Mar 11 12:05:09.399: INFO: Created: latency-svc-5glnb Mar 11 12:05:09.401: INFO: Got endpoints: latency-svc-5glnb [715.653638ms] Mar 11 12:05:09.462: INFO: Created: latency-svc-fw72z Mar 11 12:05:09.468: INFO: Got endpoints: latency-svc-fw72z [735.042424ms] Mar 11 12:05:09.486: INFO: Created: latency-svc-8wszp Mar 11 12:05:09.554: INFO: Got endpoints: latency-svc-8wszp [817.268167ms] Mar 11 12:05:09.556: INFO: Created: latency-svc-j96cj Mar 11 12:05:09.579: INFO: Got endpoints: latency-svc-j96cj [776.77462ms] Mar 11 12:05:09.607: INFO: Created: latency-svc-hj6v7 Mar 11 12:05:09.612: INFO: Got endpoints: latency-svc-hj6v7 [753.872425ms] Mar 11 12:05:09.642: INFO: Created: latency-svc-jq74q Mar 11 12:05:09.644: INFO: Got endpoints: latency-svc-jq74q [703.713616ms] Mar 11 12:05:09.704: INFO: Created: latency-svc-g27m2 Mar 11 12:05:09.706: INFO: Got endpoints: latency-svc-g27m2 [720.458517ms] Mar 11 12:05:09.753: INFO: Created: latency-svc-6tnf6 Mar 11 12:05:09.769: INFO: Got endpoints: latency-svc-6tnf6 [748.594131ms] Mar 11 12:05:09.797: INFO: Created: latency-svc-hlhf2 Mar 11 12:05:09.860: INFO: Got endpoints: latency-svc-hlhf2 [775.813294ms] Mar 11 12:05:09.862: INFO: Created: latency-svc-85fnl Mar 11 12:05:09.873: INFO: Got endpoints: latency-svc-85fnl [743.138758ms] Mar 11 12:05:09.900: INFO: Created: latency-svc-v25vz Mar 11 12:05:09.921: INFO: Got endpoints: latency-svc-v25vz [761.16065ms] Mar 11 12:05:09.921: INFO: Created: latency-svc-lxnvm Mar 11 12:05:09.926: INFO: Got endpoints: latency-svc-lxnvm [695.434254ms] Mar 11 12:05:09.945: INFO: Created: latency-svc-zgc65 Mar 11 12:05:09.950: INFO: Got endpoints: latency-svc-zgc65 [700.290303ms] Mar 11 12:05:10.003: INFO: Created: latency-svc-5mb7c Mar 11 12:05:10.031: INFO: Got endpoints: latency-svc-5mb7c [738.616936ms] Mar 11 12:05:10.032: INFO: Created: latency-svc-qsfld Mar 11 12:05:10.041: INFO: Got endpoints: latency-svc-qsfld [724.390034ms] Mar 11 12:05:10.062: INFO: Created: latency-svc-t4plj Mar 11 12:05:10.066: INFO: Got endpoints: latency-svc-t4plj [664.627165ms] Mar 11 12:05:10.084: INFO: Created: latency-svc-7klhd Mar 11 12:05:10.090: INFO: Got endpoints: latency-svc-7klhd [621.778269ms] Mar 11 12:05:10.141: INFO: Created: latency-svc-5z76n Mar 11 12:05:10.156: INFO: Got endpoints: latency-svc-5z76n [602.004183ms] Mar 11 12:05:10.200: INFO: Created: latency-svc-xscqj Mar 11 12:05:10.204: INFO: Got endpoints: latency-svc-xscqj [625.054014ms] Mar 11 12:05:10.224: INFO: Created: latency-svc-jh7ln Mar 11 12:05:10.229: INFO: Got endpoints: latency-svc-jh7ln [617.104903ms] Mar 11 12:05:10.291: INFO: Created: latency-svc-2vdg8 Mar 11 12:05:10.293: INFO: Got endpoints: latency-svc-2vdg8 [648.806373ms] Mar 11 12:05:10.327: INFO: Created: latency-svc-8kqwb Mar 11 12:05:10.332: INFO: Got endpoints: latency-svc-8kqwb [625.14784ms] Mar 11 12:05:10.350: INFO: Created: latency-svc-p4rww Mar 11 12:05:10.355: INFO: Got endpoints: latency-svc-p4rww [586.033265ms] Mar 11 12:05:10.374: INFO: Created: latency-svc-4wfc4 Mar 11 12:05:10.380: INFO: Got endpoints: latency-svc-4wfc4 [520.243778ms] Mar 11 12:05:10.441: 
INFO: Created: latency-svc-qx6r6 Mar 11 12:05:10.467: INFO: Got endpoints: latency-svc-qx6r6 [594.226988ms] Mar 11 12:05:10.468: INFO: Created: latency-svc-67xvn Mar 11 12:05:10.477: INFO: Got endpoints: latency-svc-67xvn [555.663121ms] Mar 11 12:05:10.494: INFO: Created: latency-svc-qdbh2 Mar 11 12:05:10.501: INFO: Got endpoints: latency-svc-qdbh2 [574.449577ms] Mar 11 12:05:10.518: INFO: Created: latency-svc-m8ghv Mar 11 12:05:10.536: INFO: Got endpoints: latency-svc-m8ghv [585.758389ms] Mar 11 12:05:10.537: INFO: Created: latency-svc-kx6dk Mar 11 12:05:10.584: INFO: Got endpoints: latency-svc-kx6dk [552.563127ms] Mar 11 12:05:10.584: INFO: Created: latency-svc-cc6v5 Mar 11 12:05:10.586: INFO: Got endpoints: latency-svc-cc6v5 [544.963949ms] Mar 11 12:05:10.612: INFO: Created: latency-svc-p6njb Mar 11 12:05:10.616: INFO: Got endpoints: latency-svc-p6njb [550.339753ms] Mar 11 12:05:10.636: INFO: Created: latency-svc-hx7c8 Mar 11 12:05:10.641: INFO: Got endpoints: latency-svc-hx7c8 [550.875925ms] Mar 11 12:05:10.662: INFO: Created: latency-svc-xbr5s Mar 11 12:05:10.671: INFO: Got endpoints: latency-svc-xbr5s [514.117152ms] Mar 11 12:05:10.728: INFO: Created: latency-svc-jz7fn Mar 11 12:05:10.729: INFO: Got endpoints: latency-svc-jz7fn [525.21686ms] Mar 11 12:05:10.776: INFO: Created: latency-svc-vw6pj Mar 11 12:05:10.791: INFO: Got endpoints: latency-svc-vw6pj [561.916727ms] Mar 11 12:05:10.809: INFO: Created: latency-svc-zf6bx Mar 11 12:05:10.815: INFO: Got endpoints: latency-svc-zf6bx [522.287792ms] Mar 11 12:05:10.884: INFO: Created: latency-svc-5h86d Mar 11 12:05:10.885: INFO: Got endpoints: latency-svc-5h86d [553.503427ms] Mar 11 12:05:10.917: INFO: Created: latency-svc-bzfvm Mar 11 12:05:10.924: INFO: Got endpoints: latency-svc-bzfvm [568.691067ms] Mar 11 12:05:10.942: INFO: Created: latency-svc-tpzp9 Mar 11 12:05:10.948: INFO: Got endpoints: latency-svc-tpzp9 [568.380407ms] Mar 11 12:05:10.965: INFO: Created: latency-svc-4zv4d Mar 11 12:05:10.973: INFO: Got endpoints: latency-svc-4zv4d [505.570271ms] Mar 11 12:05:11.027: INFO: Created: latency-svc-v2h7s Mar 11 12:05:11.029: INFO: Got endpoints: latency-svc-v2h7s [552.894027ms] Mar 11 12:05:11.058: INFO: Created: latency-svc-t7jbc Mar 11 12:05:11.070: INFO: Got endpoints: latency-svc-t7jbc [568.842786ms] Mar 11 12:05:11.088: INFO: Created: latency-svc-xktmr Mar 11 12:05:11.094: INFO: Got endpoints: latency-svc-xktmr [557.277428ms] Mar 11 12:05:11.115: INFO: Created: latency-svc-wwhm9 Mar 11 12:05:11.124: INFO: Got endpoints: latency-svc-wwhm9 [539.792555ms] Mar 11 12:05:11.165: INFO: Created: latency-svc-bdhs2 Mar 11 12:05:11.172: INFO: Got endpoints: latency-svc-bdhs2 [586.079416ms] Mar 11 12:05:11.202: INFO: Created: latency-svc-pwgrk Mar 11 12:05:11.215: INFO: Got endpoints: latency-svc-pwgrk [598.399902ms] Mar 11 12:05:11.238: INFO: Created: latency-svc-j77x7 Mar 11 12:05:11.254: INFO: Got endpoints: latency-svc-j77x7 [612.947958ms] Mar 11 12:05:11.302: INFO: Created: latency-svc-t4mwh Mar 11 12:05:11.332: INFO: Got endpoints: latency-svc-t4mwh [661.181169ms] Mar 11 12:05:11.352: INFO: Created: latency-svc-t7b9p Mar 11 12:05:11.354: INFO: Got endpoints: latency-svc-t7b9p [625.018028ms] Mar 11 12:05:11.376: INFO: Created: latency-svc-f6nz7 Mar 11 12:05:11.378: INFO: Got endpoints: latency-svc-f6nz7 [586.661345ms] Mar 11 12:05:11.446: INFO: Created: latency-svc-nkbbk Mar 11 12:05:11.476: INFO: Got endpoints: latency-svc-nkbbk [660.371262ms] Mar 11 12:05:11.479: INFO: Created: latency-svc-nc8x9 Mar 11 12:05:11.480: INFO: Got 
endpoints: latency-svc-nc8x9 [595.053674ms] Mar 11 12:05:11.514: INFO: Created: latency-svc-n74tp Mar 11 12:05:11.523: INFO: Got endpoints: latency-svc-n74tp [599.10074ms] Mar 11 12:05:11.545: INFO: Created: latency-svc-4xfx2 Mar 11 12:05:11.596: INFO: Got endpoints: latency-svc-4xfx2 [647.430613ms] Mar 11 12:05:11.597: INFO: Created: latency-svc-w9skd Mar 11 12:05:11.601: INFO: Got endpoints: latency-svc-w9skd [628.255771ms] Mar 11 12:05:11.626: INFO: Created: latency-svc-7md7x Mar 11 12:05:11.631: INFO: Got endpoints: latency-svc-7md7x [601.876602ms] Mar 11 12:05:11.651: INFO: Created: latency-svc-jtzm6 Mar 11 12:05:11.656: INFO: Got endpoints: latency-svc-jtzm6 [586.314859ms] Mar 11 12:05:11.677: INFO: Created: latency-svc-28krg Mar 11 12:05:11.680: INFO: Got endpoints: latency-svc-28krg [586.380063ms] Mar 11 12:05:11.733: INFO: Created: latency-svc-jlcvm Mar 11 12:05:11.737: INFO: Got endpoints: latency-svc-jlcvm [612.824181ms] Mar 11 12:05:11.758: INFO: Created: latency-svc-26bgs Mar 11 12:05:11.765: INFO: Got endpoints: latency-svc-26bgs [592.380319ms] Mar 11 12:05:11.783: INFO: Created: latency-svc-vqb4j Mar 11 12:05:11.789: INFO: Got endpoints: latency-svc-vqb4j [574.364084ms] Mar 11 12:05:11.808: INFO: Created: latency-svc-7jwk2 Mar 11 12:05:11.813: INFO: Got endpoints: latency-svc-7jwk2 [559.727054ms] Mar 11 12:05:11.832: INFO: Created: latency-svc-z4jzb Mar 11 12:05:11.889: INFO: Got endpoints: latency-svc-z4jzb [557.511435ms] Mar 11 12:05:11.891: INFO: Created: latency-svc-rbh99 Mar 11 12:05:11.898: INFO: Got endpoints: latency-svc-rbh99 [543.651437ms] Mar 11 12:05:11.920: INFO: Created: latency-svc-6hqlp Mar 11 12:05:11.944: INFO: Got endpoints: latency-svc-6hqlp [565.652558ms] Mar 11 12:05:11.969: INFO: Created: latency-svc-t9ftp Mar 11 12:05:12.045: INFO: Got endpoints: latency-svc-t9ftp [569.580284ms] Mar 11 12:05:12.047: INFO: Created: latency-svc-4xhmf Mar 11 12:05:12.055: INFO: Got endpoints: latency-svc-4xhmf [574.29151ms] Mar 11 12:05:12.091: INFO: Created: latency-svc-fmlkw Mar 11 12:05:12.103: INFO: Got endpoints: latency-svc-fmlkw [580.018867ms] Mar 11 12:05:12.130: INFO: Created: latency-svc-thj6d Mar 11 12:05:12.139: INFO: Got endpoints: latency-svc-thj6d [543.354536ms] Mar 11 12:05:12.195: INFO: Created: latency-svc-2jzl4 Mar 11 12:05:12.197: INFO: Got endpoints: latency-svc-2jzl4 [596.002394ms] Mar 11 12:05:12.228: INFO: Created: latency-svc-tdktg Mar 11 12:05:12.236: INFO: Got endpoints: latency-svc-tdktg [604.428632ms] Mar 11 12:05:12.271: INFO: Created: latency-svc-hhsbz Mar 11 12:05:12.284: INFO: Got endpoints: latency-svc-hhsbz [628.397679ms] Mar 11 12:05:12.351: INFO: Created: latency-svc-nqhp5 Mar 11 12:05:12.353: INFO: Got endpoints: latency-svc-nqhp5 [672.887896ms] Mar 11 12:05:12.384: INFO: Created: latency-svc-qdjvr Mar 11 12:05:12.393: INFO: Got endpoints: latency-svc-qdjvr [656.443863ms] Mar 11 12:05:12.415: INFO: Created: latency-svc-5ch9x Mar 11 12:05:12.417: INFO: Got endpoints: latency-svc-5ch9x [652.334519ms] Mar 11 12:05:12.444: INFO: Created: latency-svc-5mhql Mar 11 12:05:12.448: INFO: Got endpoints: latency-svc-5mhql [658.671909ms] Mar 11 12:05:12.488: INFO: Created: latency-svc-tndpg Mar 11 12:05:12.508: INFO: Created: latency-svc-48hdp Mar 11 12:05:12.508: INFO: Got endpoints: latency-svc-tndpg [694.818793ms] Mar 11 12:05:12.514: INFO: Got endpoints: latency-svc-48hdp [624.597778ms] Mar 11 12:05:12.559: INFO: Created: latency-svc-zbbhk Mar 11 12:05:12.571: INFO: Got endpoints: latency-svc-zbbhk [672.440562ms] Mar 11 12:05:12.632: INFO: 
Created: latency-svc-twpxx Mar 11 12:05:12.633: INFO: Got endpoints: latency-svc-twpxx [689.918272ms] Mar 11 12:05:12.659: INFO: Created: latency-svc-q7x8w Mar 11 12:05:12.665: INFO: Got endpoints: latency-svc-q7x8w [619.417901ms] Mar 11 12:05:12.682: INFO: Created: latency-svc-7hdr4 Mar 11 12:05:12.689: INFO: Got endpoints: latency-svc-7hdr4 [634.423895ms] Mar 11 12:05:12.721: INFO: Created: latency-svc-dtvjd Mar 11 12:05:12.726: INFO: Got endpoints: latency-svc-dtvjd [622.405289ms] Mar 11 12:05:12.775: INFO: Created: latency-svc-c79vm Mar 11 12:05:12.778: INFO: Got endpoints: latency-svc-c79vm [638.735019ms] Mar 11 12:05:12.803: INFO: Created: latency-svc-w9x54 Mar 11 12:05:12.810: INFO: Got endpoints: latency-svc-w9x54 [612.907108ms] Mar 11 12:05:12.833: INFO: Created: latency-svc-zzhv8 Mar 11 12:05:12.841: INFO: Got endpoints: latency-svc-zzhv8 [605.213896ms] Mar 11 12:05:12.925: INFO: Created: latency-svc-qshnv Mar 11 12:05:12.931: INFO: Got endpoints: latency-svc-qshnv [646.55433ms] Mar 11 12:05:12.950: INFO: Created: latency-svc-p4k5j Mar 11 12:05:12.952: INFO: Got endpoints: latency-svc-p4k5j [598.709772ms] Mar 11 12:05:12.979: INFO: Created: latency-svc-p7xs6 Mar 11 12:05:12.981: INFO: Got endpoints: latency-svc-p7xs6 [587.50611ms] Mar 11 12:05:13.006: INFO: Created: latency-svc-k7g9c Mar 11 12:05:13.078: INFO: Created: latency-svc-nwtz9 Mar 11 12:05:13.078: INFO: Got endpoints: latency-svc-k7g9c [661.179872ms] Mar 11 12:05:13.093: INFO: Got endpoints: latency-svc-nwtz9 [645.088206ms] Mar 11 12:05:13.112: INFO: Created: latency-svc-wdqzm Mar 11 12:05:13.123: INFO: Got endpoints: latency-svc-wdqzm [615.126323ms] Mar 11 12:05:13.150: INFO: Created: latency-svc-fcs7v Mar 11 12:05:13.242: INFO: Got endpoints: latency-svc-fcs7v [728.431538ms] Mar 11 12:05:13.244: INFO: Created: latency-svc-9dblz Mar 11 12:05:13.273: INFO: Created: latency-svc-wjn9r Mar 11 12:05:13.273: INFO: Got endpoints: latency-svc-9dblz [702.49861ms] Mar 11 12:05:13.306: INFO: Got endpoints: latency-svc-wjn9r [672.95368ms] Mar 11 12:05:13.380: INFO: Created: latency-svc-sgt7d Mar 11 12:05:13.390: INFO: Got endpoints: latency-svc-sgt7d [724.647427ms] Mar 11 12:05:13.423: INFO: Created: latency-svc-vh6pl Mar 11 12:05:13.432: INFO: Got endpoints: latency-svc-vh6pl [742.972881ms] Mar 11 12:05:13.453: INFO: Created: latency-svc-smdsz Mar 11 12:05:13.456: INFO: Got endpoints: latency-svc-smdsz [729.857164ms] Mar 11 12:05:13.480: INFO: Created: latency-svc-s9nc6 Mar 11 12:05:13.525: INFO: Created: latency-svc-c9rhp Mar 11 12:05:13.534: INFO: Got endpoints: latency-svc-s9nc6 [756.356867ms] Mar 11 12:05:13.536: INFO: Got endpoints: latency-svc-c9rhp [726.037017ms] Mar 11 12:05:13.580: INFO: Created: latency-svc-7t84p Mar 11 12:05:13.583: INFO: Got endpoints: latency-svc-7t84p [741.566569ms] Mar 11 12:05:13.603: INFO: Created: latency-svc-k5wvt Mar 11 12:05:13.674: INFO: Got endpoints: latency-svc-k5wvt [742.643302ms] Mar 11 12:05:13.674: INFO: Created: latency-svc-8cqbp Mar 11 12:05:13.678: INFO: Got endpoints: latency-svc-8cqbp [726.37488ms] Mar 11 12:05:13.709: INFO: Created: latency-svc-5fkxx Mar 11 12:05:13.729: INFO: Got endpoints: latency-svc-5fkxx [748.432449ms] Mar 11 12:05:13.753: INFO: Created: latency-svc-rkr4m Mar 11 12:05:13.823: INFO: Got endpoints: latency-svc-rkr4m [744.990201ms] Mar 11 12:05:13.824: INFO: Created: latency-svc-xxdmb Mar 11 12:05:13.840: INFO: Got endpoints: latency-svc-xxdmb [747.655382ms] Mar 11 12:05:13.865: INFO: Created: latency-svc-88tfw Mar 11 12:05:13.873: INFO: Got endpoints: 
latency-svc-88tfw [749.355408ms] Mar 11 12:05:13.897: INFO: Created: latency-svc-nbbjh Mar 11 12:05:13.955: INFO: Got endpoints: latency-svc-nbbjh [712.910911ms] Mar 11 12:05:13.956: INFO: Created: latency-svc-vhsls Mar 11 12:05:13.981: INFO: Created: latency-svc-pnlzj Mar 11 12:05:13.982: INFO: Got endpoints: latency-svc-vhsls [708.338252ms] Mar 11 12:05:14.015: INFO: Created: latency-svc-nhrj8 Mar 11 12:05:14.024: INFO: Got endpoints: latency-svc-pnlzj [717.115553ms] Mar 11 12:05:14.044: INFO: Created: latency-svc-vxxrk Mar 11 12:05:14.094: INFO: Created: latency-svc-84kfx Mar 11 12:05:14.095: INFO: Got endpoints: latency-svc-nhrj8 [704.99439ms] Mar 11 12:05:14.119: INFO: Created: latency-svc-brr4r Mar 11 12:05:14.127: INFO: Got endpoints: latency-svc-vxxrk [694.483162ms] Mar 11 12:05:14.155: INFO: Created: latency-svc-hxqhs Mar 11 12:05:14.183: INFO: Got endpoints: latency-svc-84kfx [727.335641ms] Mar 11 12:05:14.184: INFO: Created: latency-svc-gqgvz Mar 11 12:05:14.231: INFO: Created: latency-svc-4jqvn Mar 11 12:05:14.236: INFO: Got endpoints: latency-svc-brr4r [701.191733ms] Mar 11 12:05:14.258: INFO: Created: latency-svc-qcgxq Mar 11 12:05:14.294: INFO: Created: latency-svc-54sd4 Mar 11 12:05:14.294: INFO: Got endpoints: latency-svc-hxqhs [758.076268ms] Mar 11 12:05:14.320: INFO: Got endpoints: latency-svc-gqgvz [737.685036ms] Mar 11 12:05:14.321: INFO: Created: latency-svc-blt8k Mar 11 12:05:14.374: INFO: Got endpoints: latency-svc-4jqvn [700.637075ms] Mar 11 12:05:14.375: INFO: Created: latency-svc-lbwwd Mar 11 12:05:14.399: INFO: Created: latency-svc-vhfvf Mar 11 12:05:14.425: INFO: Got endpoints: latency-svc-qcgxq [747.133167ms] Mar 11 12:05:14.425: INFO: Created: latency-svc-f7kjx Mar 11 12:05:14.450: INFO: Created: latency-svc-hc2vg Mar 11 12:05:14.473: INFO: Got endpoints: latency-svc-54sd4 [743.927015ms] Mar 11 12:05:14.474: INFO: Created: latency-svc-qq9c7 Mar 11 12:05:14.524: INFO: Got endpoints: latency-svc-blt8k [701.093724ms] Mar 11 12:05:14.533: INFO: Created: latency-svc-czfxs Mar 11 12:05:14.562: INFO: Created: latency-svc-zsrmj Mar 11 12:05:14.590: INFO: Got endpoints: latency-svc-lbwwd [749.955914ms] Mar 11 12:05:14.591: INFO: Created: latency-svc-8txk6 Mar 11 12:05:14.621: INFO: Created: latency-svc-9bd2k Mar 11 12:05:14.621: INFO: Got endpoints: latency-svc-vhfvf [747.847164ms] Mar 11 12:05:14.674: INFO: Got endpoints: latency-svc-f7kjx [718.139247ms] Mar 11 12:05:14.678: INFO: Created: latency-svc-mjl5w Mar 11 12:05:14.701: INFO: Created: latency-svc-p9z74 Mar 11 12:05:14.719: INFO: Got endpoints: latency-svc-hc2vg [737.853877ms] Mar 11 12:05:14.769: INFO: Got endpoints: latency-svc-qq9c7 [745.653617ms] Mar 11 12:05:14.819: INFO: Got endpoints: latency-svc-czfxs [724.480051ms] Mar 11 12:05:14.869: INFO: Got endpoints: latency-svc-zsrmj [742.683828ms] Mar 11 12:05:14.955: INFO: Got endpoints: latency-svc-8txk6 [771.973875ms] Mar 11 12:05:14.969: INFO: Got endpoints: latency-svc-9bd2k [733.697269ms] Mar 11 12:05:15.020: INFO: Got endpoints: latency-svc-mjl5w [725.389553ms] Mar 11 12:05:15.075: INFO: Got endpoints: latency-svc-p9z74 [754.292961ms] Mar 11 12:05:15.075: INFO: Latencies: [79.490437ms 111.934018ms 135.657599ms 165.131096ms 208.720075ms 238.084058ms 262.489057ms 304.200592ms 364.840276ms 388.539235ms 435.934857ms 476.169736ms 505.570271ms 514.117152ms 515.080549ms 515.766502ms 520.243778ms 522.287792ms 525.21686ms 538.973457ms 539.792555ms 543.354536ms 543.651437ms 544.963949ms 550.339753ms 550.875925ms 550.949678ms 552.563127ms 552.894027ms 553.503427ms 
555.663121ms 557.277428ms 557.511435ms 558.824786ms 559.727054ms 561.366198ms 561.916727ms 562.045319ms 565.652558ms 568.380407ms 568.421464ms 568.691067ms 568.771912ms 568.842786ms 568.981388ms 569.580284ms 569.62972ms 573.498494ms 574.29151ms 574.328045ms 574.346906ms 574.364084ms 574.449577ms 574.558319ms 577.590254ms 579.225119ms 580.018867ms 580.338108ms 580.421905ms 580.58396ms 585.758389ms 585.9611ms 586.033265ms 586.079416ms 586.314859ms 586.380063ms 586.661345ms 587.50611ms 588.906973ms 589.435367ms 591.642968ms 592.380319ms 594.226988ms 595.053674ms 595.200646ms 596.002394ms 598.399902ms 598.437093ms 598.484845ms 598.584389ms 598.709772ms 599.10074ms 599.486439ms 601.876602ms 602.004183ms 602.881664ms 602.93926ms 604.428632ms 604.960949ms 605.213896ms 605.988315ms 609.52119ms 610.869249ms 612.824181ms 612.907108ms 612.947958ms 615.126323ms 616.029262ms 616.365908ms 617.104903ms 617.118428ms 619.417901ms 621.778269ms 622.405289ms 623.700734ms 624.597778ms 625.018028ms 625.054014ms 625.14784ms 627.897436ms 628.255771ms 628.320457ms 628.397679ms 634.1263ms 634.423895ms 636.514982ms 638.735019ms 638.985727ms 639.841229ms 645.088206ms 645.326218ms 646.55433ms 647.430613ms 648.806373ms 649.947446ms 650.369415ms 652.270855ms 652.334519ms 656.443863ms 658.099561ms 658.671909ms 660.371262ms 661.179872ms 661.181169ms 664.627165ms 671.788651ms 672.440562ms 672.887896ms 672.95368ms 679.075214ms 689.918272ms 694.051138ms 694.483162ms 694.818793ms 695.434254ms 700.290303ms 700.637075ms 700.637933ms 701.093724ms 701.191733ms 702.49861ms 703.713616ms 704.99439ms 706.35654ms 706.704057ms 708.338252ms 712.910911ms 715.346709ms 715.653638ms 717.115553ms 718.139247ms 720.458517ms 724.390034ms 724.480051ms 724.647427ms 725.389553ms 726.037017ms 726.37488ms 727.335641ms 728.431538ms 729.857164ms 733.697269ms 735.042424ms 737.685036ms 737.853877ms 738.616936ms 741.566569ms 742.643302ms 742.683828ms 742.972881ms 743.138758ms 743.927015ms 744.990201ms 745.653617ms 747.133167ms 747.655382ms 747.847164ms 748.432449ms 748.594131ms 749.355408ms 749.955914ms 753.872425ms 754.292961ms 756.356867ms 758.076268ms 761.16065ms 771.973875ms 775.813294ms 776.77462ms 817.268167ms] Mar 11 12:05:15.075: INFO: 50 %ile: 617.118428ms Mar 11 12:05:15.075: INFO: 90 %ile: 743.138758ms Mar 11 12:05:15.075: INFO: 99 %ile: 776.77462ms Mar 11 12:05:15.075: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:05:15.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-gf6hq" for this suite. 
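[editor's note] What this test measures is the delay between creating a Service that selects the already-running svc-latency-rc pod and the matching Endpoints object becoming non-empty; the 200 samples are then sorted to produce the 50/90/99 percentiles reported above. A coarse, single-sample shell equivalent is sketched below; the service name and port numbers are placeholders, and 50 ms polling is far cruder than the watch-based timing the framework uses.

#!/usr/bin/env bash
# Coarse, single-sample version of the endpoint-latency measurement.
# Assumes the svc-latency-rc ReplicationController is still running; names/ports are placeholders.
set -euo pipefail
NS=e2e-tests-svc-latency-gf6hq
SVC=latency-svc-demo

start=$(date +%s%N)
kubectl expose rc svc-latency-rc --name="$SVC" --port=80 --target-port=8080 -n "$NS"

# Wait until the Endpoints object for the new service has at least one address.
until [ -n "$(kubectl get endpoints "$SVC" -n "$NS" \
        -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
  sleep 0.05
done
end=$(date +%s%N)

echo "endpoint latency: $(( (end - start) / 1000000 )) ms"
kubectl delete service "$SVC" -n "$NS"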
Mar 11 12:05:35.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:05:35.111: INFO: namespace: e2e-tests-svc-latency-gf6hq, resource: bindings, ignored listing per whitelist Mar 11 12:05:35.158: INFO: namespace e2e-tests-svc-latency-gf6hq deletion completed in 20.080827013s • [SLOW TEST:30.965 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:05:35.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 11 12:05:35.243: INFO: Waiting up to 5m0s for pod "downward-api-9d36a356-6390-11ea-bacb-0242ac11000a" in namespace "e2e-tests-downward-api-cnr9x" to be "success or failure" Mar 11 12:05:35.245: INFO: Pod "downward-api-9d36a356-6390-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.98802ms Mar 11 12:05:37.248: INFO: Pod "downward-api-9d36a356-6390-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005083642s Mar 11 12:05:39.252: INFO: Pod "downward-api-9d36a356-6390-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008945761s STEP: Saw pod success Mar 11 12:05:39.252: INFO: Pod "downward-api-9d36a356-6390-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 12:05:39.255: INFO: Trying to get logs from node hunter-worker pod downward-api-9d36a356-6390-11ea-bacb-0242ac11000a container dapi-container: STEP: delete the pod Mar 11 12:05:39.294: INFO: Waiting for pod downward-api-9d36a356-6390-11ea-bacb-0242ac11000a to disappear Mar 11 12:05:39.304: INFO: Pod downward-api-9d36a356-6390-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:05:39.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cnr9x" for this suite. 
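[editor's note] The pod built for this check simply maps metadata.uid into the container environment through the downward API and echoes it. A minimal manifest with the same shape is sketched below; the pod name, image and namespace are illustrative stand-ins, not the test's generated spec.

#!/usr/bin/env bash
# Minimal downward-API pod exposing its own UID as an env var, then printing it.
set -euo pipefail
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF

# Wait for the one-shot container to finish, then compare its output with the API server's view.
until [ "$(kubectl get pod downward-api-uid-demo -o jsonpath='{.status.phase}')" = "Succeeded" ]; do
  sleep 1
done
kubectl logs downward-api-uid-demo
kubectl get pod downward-api-uid-demo -o jsonpath='{.metadata.uid}{"\n"}'

kubectl delete pod downward-api-uid-demo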
Mar 11 12:05:45.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:05:45.386: INFO: namespace: e2e-tests-downward-api-cnr9x, resource: bindings, ignored listing per whitelist Mar 11 12:05:45.436: INFO: namespace e2e-tests-downward-api-cnr9x deletion completed in 6.128990256s • [SLOW TEST:10.278 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:05:45.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dz7v8 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 11 12:05:45.588: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 11 12:06:03.697: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.115 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-dz7v8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 12:06:03.697: INFO: >>> kubeConfig: /root/.kube/config I0311 12:06:03.729971 6 log.go:172] (0xc000df7290) (0xc001c74960) Create stream I0311 12:06:03.730006 6 log.go:172] (0xc000df7290) (0xc001c74960) Stream added, broadcasting: 1 I0311 12:06:03.732029 6 log.go:172] (0xc000df7290) Reply frame received for 1 I0311 12:06:03.732072 6 log.go:172] (0xc000df7290) (0xc002266280) Create stream I0311 12:06:03.732086 6 log.go:172] (0xc000df7290) (0xc002266280) Stream added, broadcasting: 3 I0311 12:06:03.732927 6 log.go:172] (0xc000df7290) Reply frame received for 3 I0311 12:06:03.732961 6 log.go:172] (0xc000df7290) (0xc001c74a00) Create stream I0311 12:06:03.732972 6 log.go:172] (0xc000df7290) (0xc001c74a00) Stream added, broadcasting: 5 I0311 12:06:03.733677 6 log.go:172] (0xc000df7290) Reply frame received for 5 I0311 12:06:04.793568 6 log.go:172] (0xc000df7290) Data frame received for 3 I0311 12:06:04.793604 6 log.go:172] (0xc002266280) (3) Data frame handling I0311 12:06:04.793621 6 log.go:172] (0xc002266280) (3) Data frame sent I0311 12:06:04.793635 6 log.go:172] (0xc000df7290) Data frame received for 3 I0311 12:06:04.793648 6 log.go:172] (0xc002266280) (3) Data frame handling I0311 12:06:04.793722 6 log.go:172] (0xc000df7290) Data frame received for 5 I0311 12:06:04.793742 6 log.go:172] (0xc001c74a00) (5) Data frame handling I0311 12:06:04.795383 6 log.go:172] (0xc000df7290) Data frame received for 1 I0311 12:06:04.795413 6 log.go:172] (0xc001c74960) (1) 
Data frame handling I0311 12:06:04.795433 6 log.go:172] (0xc001c74960) (1) Data frame sent I0311 12:06:04.795447 6 log.go:172] (0xc000df7290) (0xc001c74960) Stream removed, broadcasting: 1 I0311 12:06:04.795457 6 log.go:172] (0xc000df7290) Go away received I0311 12:06:04.795592 6 log.go:172] (0xc000df7290) (0xc001c74960) Stream removed, broadcasting: 1 I0311 12:06:04.795615 6 log.go:172] (0xc000df7290) (0xc002266280) Stream removed, broadcasting: 3 I0311 12:06:04.795632 6 log.go:172] (0xc000df7290) (0xc001c74a00) Stream removed, broadcasting: 5 Mar 11 12:06:04.795: INFO: Found all expected endpoints: [netserver-0] Mar 11 12:06:04.798: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.228 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-dz7v8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 12:06:04.798: INFO: >>> kubeConfig: /root/.kube/config I0311 12:06:04.828632 6 log.go:172] (0xc0011ce2c0) (0xc000c74140) Create stream I0311 12:06:04.828659 6 log.go:172] (0xc0011ce2c0) (0xc000c74140) Stream added, broadcasting: 1 I0311 12:06:04.833359 6 log.go:172] (0xc0011ce2c0) Reply frame received for 1 I0311 12:06:04.833400 6 log.go:172] (0xc0011ce2c0) (0xc000c741e0) Create stream I0311 12:06:04.833415 6 log.go:172] (0xc0011ce2c0) (0xc000c741e0) Stream added, broadcasting: 3 I0311 12:06:04.836062 6 log.go:172] (0xc0011ce2c0) Reply frame received for 3 I0311 12:06:04.836106 6 log.go:172] (0xc0011ce2c0) (0xc00292b900) Create stream I0311 12:06:04.836126 6 log.go:172] (0xc0011ce2c0) (0xc00292b900) Stream added, broadcasting: 5 I0311 12:06:04.837426 6 log.go:172] (0xc0011ce2c0) Reply frame received for 5 I0311 12:06:05.893522 6 log.go:172] (0xc0011ce2c0) Data frame received for 3 I0311 12:06:05.893558 6 log.go:172] (0xc000c741e0) (3) Data frame handling I0311 12:06:05.893590 6 log.go:172] (0xc000c741e0) (3) Data frame sent I0311 12:06:05.893615 6 log.go:172] (0xc0011ce2c0) Data frame received for 3 I0311 12:06:05.893638 6 log.go:172] (0xc000c741e0) (3) Data frame handling I0311 12:06:05.893711 6 log.go:172] (0xc0011ce2c0) Data frame received for 5 I0311 12:06:05.893769 6 log.go:172] (0xc00292b900) (5) Data frame handling I0311 12:06:05.895434 6 log.go:172] (0xc0011ce2c0) Data frame received for 1 I0311 12:06:05.895462 6 log.go:172] (0xc000c74140) (1) Data frame handling I0311 12:06:05.895506 6 log.go:172] (0xc000c74140) (1) Data frame sent I0311 12:06:05.895531 6 log.go:172] (0xc0011ce2c0) (0xc000c74140) Stream removed, broadcasting: 1 I0311 12:06:05.895576 6 log.go:172] (0xc0011ce2c0) Go away received I0311 12:06:05.895645 6 log.go:172] (0xc0011ce2c0) (0xc000c74140) Stream removed, broadcasting: 1 I0311 12:06:05.895666 6 log.go:172] (0xc0011ce2c0) (0xc000c741e0) Stream removed, broadcasting: 3 I0311 12:06:05.895683 6 log.go:172] (0xc0011ce2c0) (0xc00292b900) Stream removed, broadcasting: 5 Mar 11 12:06:05.895: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:06:05.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-dz7v8" for this suite. 
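The networking check above shells out to `echo 'hostName' | nc -w 1 -u <pod IP> 8081` inside a host test container and greps for a non-empty reply. A rough, stdlib-only equivalent of that UDP probe (the address and timeout are illustrative, taken from the log):

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

// probeUDP sends "hostName" to addr and returns the trimmed reply,
// mirroring the `echo 'hostName' | nc -w 1 -u <ip> <port>` check in the log.
func probeUDP(addr string, timeout time.Duration) (string, error) {
	conn, err := net.DialTimeout("udp", addr, timeout)
	if err != nil {
		return "", err
	}
	defer conn.Close()

	conn.SetDeadline(time.Now().Add(timeout))
	if _, err := conn.Write([]byte("hostName")); err != nil {
		return "", err
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(buf[:n])), nil
}

func main() {
	// Illustrative pod IP/port from the log; adjust for a real cluster.
	reply, err := probeUDP("10.244.2.115:8081", time.Second)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("netserver replied:", reply)
}
```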
Mar 11 12:06:27.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:06:27.969: INFO: namespace: e2e-tests-pod-network-test-dz7v8, resource: bindings, ignored listing per whitelist Mar 11 12:06:27.989: INFO: namespace e2e-tests-pod-network-test-dz7v8 deletion completed in 22.089349708s • [SLOW TEST:42.553 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:06:27.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 11 12:06:28.092: INFO: Waiting up to 5m0s for pod "pod-bcb8896f-6390-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-sxtsq" to be "success or failure" Mar 11 12:06:28.112: INFO: Pod "pod-bcb8896f-6390-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.51656ms Mar 11 12:06:30.115: INFO: Pod "pod-bcb8896f-6390-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02296072s STEP: Saw pod success Mar 11 12:06:30.115: INFO: Pod "pod-bcb8896f-6390-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 12:06:30.118: INFO: Trying to get logs from node hunter-worker pod pod-bcb8896f-6390-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 12:06:30.147: INFO: Waiting for pod pod-bcb8896f-6390-11ea-bacb-0242ac11000a to disappear Mar 11 12:06:30.151: INFO: Pod pod-bcb8896f-6390-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:06:30.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sxtsq" for this suite. 
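The EmptyDir (root,0644,default) test above mounts an emptyDir volume on the default medium and has the test container create a file with the requested mode inside it. A minimal sketch of the volume/mount wiring with the k8s.io/api types (names and mount path are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// An emptyDir volume on the default (node disk) medium...
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
		},
	}
	// ...mounted into the test container at /test-volume.
	mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}

	fmt.Printf("volume %q mounted at %q (medium=%q)\n",
		vol.Name, mount.MountPath, vol.VolumeSource.EmptyDir.Medium)
}
```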
Mar 11 12:06:36.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:06:36.247: INFO: namespace: e2e-tests-emptydir-sxtsq, resource: bindings, ignored listing per whitelist Mar 11 12:06:36.268: INFO: namespace e2e-tests-emptydir-sxtsq deletion completed in 6.11394395s • [SLOW TEST:8.278 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:06:36.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 11 12:06:36.420: INFO: Waiting up to 5m0s for pod "pod-c1ae6e71-6390-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-bctbw" to be "success or failure" Mar 11 12:06:36.443: INFO: Pod "pod-c1ae6e71-6390-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.204336ms Mar 11 12:06:38.447: INFO: Pod "pod-c1ae6e71-6390-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02650303s STEP: Saw pod success Mar 11 12:06:38.447: INFO: Pod "pod-c1ae6e71-6390-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 12:06:38.449: INFO: Trying to get logs from node hunter-worker pod pod-c1ae6e71-6390-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 12:06:38.483: INFO: Waiting for pod pod-c1ae6e71-6390-11ea-bacb-0242ac11000a to disappear Mar 11 12:06:38.491: INFO: Pod pod-c1ae6e71-6390-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:06:38.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bctbw" for this suite. 
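The two EmptyDir cases above differ mainly in who writes the file and with which mode (0644 as root vs 0777 as non-root). A small, stdlib-only sketch of the kind of mode check the test container performs (the temp directory stands in for the mounted emptyDir; paths are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "emptydir-demo") // stand-in for the mounted emptyDir
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	for _, mode := range []os.FileMode{0o644, 0o777} {
		path := filepath.Join(dir, fmt.Sprintf("test-file-%o", mode))
		if err := os.WriteFile(path, []byte("mount-tester new file\n"), mode); err != nil {
			panic(err)
		}
		info, err := os.Stat(path)
		if err != nil {
			panic(err)
		}
		// Note: the effective mode can be narrowed by the process umask.
		fmt.Printf("%s: requested %o, got %o\n", filepath.Base(path), mode, info.Mode().Perm())
	}
}
```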
Mar 11 12:06:44.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:06:44.573: INFO: namespace: e2e-tests-emptydir-bctbw, resource: bindings, ignored listing per whitelist Mar 11 12:06:44.586: INFO: namespace e2e-tests-emptydir-bctbw deletion completed in 6.092272366s • [SLOW TEST:8.318 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:06:44.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 12:06:44.700: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:06:45.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-g2fd9" for this suite. 
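The CustomResourceDefinition test above simply creates and then deletes a CRD object. A hypothetical sketch of the kind of object it builds, using the v1beta1 apiextensions types that match this v1.13-era log (the group, kind, and plural here are made up):

```go
package main

import (
	"fmt"

	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := &apiextensionsv1beta1.CustomResourceDefinition{
		// The CRD's metadata.name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "noxus.mygroup.example.com"},
		Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
			Group:   "mygroup.example.com",
			Version: "v1beta1",
			Scope:   apiextensionsv1beta1.NamespaceScoped,
			Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
				Plural:   "noxus",
				Singular: "noxu",
				Kind:     "Noxu",
				ListKind: "NoxuList",
			},
		},
	}
	fmt.Printf("would create CRD %q (kind %s)\n", crd.Name, crd.Spec.Names.Kind)
}
```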
Mar 11 12:06:51.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:06:51.818: INFO: namespace: e2e-tests-custom-resource-definition-g2fd9, resource: bindings, ignored listing per whitelist Mar 11 12:06:51.882: INFO: namespace e2e-tests-custom-resource-definition-g2fd9 deletion completed in 6.108492774s • [SLOW TEST:7.295 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:06:51.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 11 12:06:51.949: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 11 12:06:51.958: INFO: Waiting for terminating namespaces to be deleted... Mar 11 12:06:51.960: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 11 12:06:51.964: INFO: kindnet-jjqmp from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 11 12:06:51.964: INFO: Container kindnet-cni ready: true, restart count 0 Mar 11 12:06:51.964: INFO: kube-proxy-h66sh from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 11 12:06:51.964: INFO: Container kube-proxy ready: true, restart count 0 Mar 11 12:06:51.964: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 11 12:06:51.968: INFO: kube-proxy-chv9d from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 11 12:06:51.968: INFO: Container kube-proxy ready: true, restart count 0 Mar 11 12:06:51.968: INFO: kindnet-nwqfj from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 11 12:06:51.968: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-cd60d8d0-6390-11ea-bacb-0242ac11000a 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-cd60d8d0-6390-11ea-bacb-0242ac11000a off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-cd60d8d0-6390-11ea-bacb-0242ac11000a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:06:58.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-h7cwt" for this suite. Mar 11 12:07:10.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:07:10.215: INFO: namespace: e2e-tests-sched-pred-h7cwt, resource: bindings, ignored listing per whitelist Mar 11 12:07:10.227: INFO: namespace e2e-tests-sched-pred-h7cwt deletion completed in 12.094485262s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:18.345 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:07:10.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 12:07:10.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 11 12:07:10.465: INFO: stderr: "" Mar 11 12:07:10.465: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:07:10.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zt69k" for this suite. 
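The SchedulerPredicates case earlier in this block applies a random label to a node (the trailing `42` is the label's value) and then relaunches a pod whose nodeSelector must match it. A minimal sketch of that selector wiring (the label key, pod name, and image are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labelKey := "kubernetes.io/e2e-example" // the real test generates a random suffix
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "with-labels", Image: "k8s.gcr.io/pause"}},
			// The scheduler will only place this pod on a node carrying labelKey=42.
			NodeSelector: map[string]string{labelKey: "42"},
		},
	}
	fmt.Printf("pod %s requires node label %s=%s\n",
		pod.Name, labelKey, pod.Spec.NodeSelector[labelKey])
}
```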
Mar 11 12:07:16.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:07:16.543: INFO: namespace: e2e-tests-kubectl-zt69k, resource: bindings, ignored listing per whitelist Mar 11 12:07:16.545: INFO: namespace e2e-tests-kubectl-zt69k deletion completed in 6.077463213s • [SLOW TEST:6.318 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:07:16.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Mar 11 12:07:16.668: INFO: Waiting up to 5m0s for pod "client-containers-d9acf8c2-6390-11ea-bacb-0242ac11000a" in namespace "e2e-tests-containers-d4l5v" to be "success or failure" Mar 11 12:07:16.678: INFO: Pod "client-containers-d9acf8c2-6390-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.274422ms Mar 11 12:07:18.681: INFO: Pod "client-containers-d9acf8c2-6390-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01262348s STEP: Saw pod success Mar 11 12:07:18.681: INFO: Pod "client-containers-d9acf8c2-6390-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 12:07:18.684: INFO: Trying to get logs from node hunter-worker2 pod client-containers-d9acf8c2-6390-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 12:07:18.704: INFO: Waiting for pod client-containers-d9acf8c2-6390-11ea-bacb-0242ac11000a to disappear Mar 11 12:07:18.709: INFO: Pod client-containers-d9acf8c2-6390-11ea-bacb-0242ac11000a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:07:18.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-d4l5v" for this suite. 
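The Docker Containers test above overrides the image's default arguments (the Docker CMD) by setting `args` on the container while leaving `command` empty. A minimal sketch of that override (the image and argument values are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Setting Args (with Command left empty) keeps the image's ENTRYPOINT
	// but replaces its default CMD with these arguments.
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox",
		Args:  []string{"override", "arguments"},
	}
	fmt.Printf("container %s runs with args %v\n", c.Name, c.Args)
}
```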
Mar 11 12:07:24.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:07:24.799: INFO: namespace: e2e-tests-containers-d4l5v, resource: bindings, ignored listing per whitelist Mar 11 12:07:24.823: INFO: namespace e2e-tests-containers-d4l5v deletion completed in 6.111126736s • [SLOW TEST:8.278 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:07:24.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-dgvwc [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-dgvwc STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-dgvwc STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-dgvwc STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-dgvwc Mar 11 12:07:26.973: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-dgvwc, name: ss-0, uid: debf0a64-6390-11ea-9978-0242ac11000d, status phase: Pending. Waiting for statefulset controller to delete. Mar 11 12:07:27.873: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-dgvwc, name: ss-0, uid: debf0a64-6390-11ea-9978-0242ac11000d, status phase: Failed. Waiting for statefulset controller to delete. Mar 11 12:07:27.884: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-dgvwc, name: ss-0, uid: debf0a64-6390-11ea-9978-0242ac11000d, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 11 12:07:27.895: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-dgvwc STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-dgvwc STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-dgvwc and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 11 12:07:38.030: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dgvwc Mar 11 12:07:38.033: INFO: Scaling statefulset ss to 0 Mar 11 12:07:48.056: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 12:07:48.058: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:07:48.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-dgvwc" for this suite. Mar 11 12:07:54.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:07:54.166: INFO: namespace: e2e-tests-statefulset-dgvwc, resource: bindings, ignored listing per whitelist Mar 11 12:07:54.178: INFO: namespace e2e-tests-statefulset-dgvwc deletion completed in 6.078849501s • [SLOW TEST:29.355 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:07:54.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 11 12:07:54.264: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f01591d6-6390-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-222s7" to be "success or failure" Mar 11 12:07:54.293: INFO: Pod "downwardapi-volume-f01591d6-6390-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.019365ms Mar 11 12:07:56.296: INFO: Pod "downwardapi-volume-f01591d6-6390-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032146412s Mar 11 12:07:58.300: INFO: Pod "downwardapi-volume-f01591d6-6390-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035418636s STEP: Saw pod success Mar 11 12:07:58.300: INFO: Pod "downwardapi-volume-f01591d6-6390-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 12:07:58.302: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f01591d6-6390-11ea-bacb-0242ac11000a container client-container: STEP: delete the pod Mar 11 12:07:58.338: INFO: Waiting for pod downwardapi-volume-f01591d6-6390-11ea-bacb-0242ac11000a to disappear Mar 11 12:07:58.345: INFO: Pod downwardapi-volume-f01591d6-6390-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:07:58.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-222s7" for this suite. Mar 11 12:08:04.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:08:04.429: INFO: namespace: e2e-tests-projected-222s7, resource: bindings, ignored listing per whitelist Mar 11 12:08:04.459: INFO: namespace e2e-tests-projected-222s7 deletion completed in 6.086923066s • [SLOW TEST:10.281 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:08:04.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:09:04.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-rdmpl" for this suite. 
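The probe test above creates a container whose readiness probe always fails and verifies that the pod is never marked Ready and never restarted. As a simplified, stdlib-only analogue of an exec readiness probe (this is not kubelet code, just the periodic-polling idea), one could do:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// execProbe runs the probe command once and reports success if it exits 0,
// loosely mirroring an exec readiness probe.
func execProbe(command ...string) bool {
	return exec.Command(command[0], command[1:]...).Run() == nil
}

func main() {
	// "/bin/false" always fails, so this pod-analogue would never become ready.
	probe := []string{"/bin/false"}
	for i := 0; i < 3; i++ {
		ready := execProbe(probe...)
		fmt.Printf("probe attempt %d: ready=%v\n", i+1, ready)
		time.Sleep(time.Second) // periodSeconds analogue
	}
}
```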
Mar 11 12:09:26.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:09:26.650: INFO: namespace: e2e-tests-container-probe-rdmpl, resource: bindings, ignored listing per whitelist Mar 11 12:09:26.715: INFO: namespace e2e-tests-container-probe-rdmpl deletion completed in 22.106124534s • [SLOW TEST:82.256 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:09:26.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 11 12:09:26.801: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/ (200; 4.694764ms)
Mar 11 12:09:26.804: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.559043ms)
Mar 11 12:09:26.806: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.357553ms)
Mar 11 12:09:26.808: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.118832ms)
Mar 11 12:09:26.811: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.339808ms)
Mar 11 12:09:26.813: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.147394ms)
Mar 11 12:09:26.828: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 14.812125ms)
Mar 11 12:09:26.830: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.584295ms)
Mar 11 12:09:26.832: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.109908ms)
Mar 11 12:09:26.835: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.275883ms)
Mar 11 12:09:26.837: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.337738ms)
Mar 11 12:09:26.839: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.347067ms)
Mar 11 12:09:26.841: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.06045ms)
Mar 11 12:09:26.844: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.258176ms)
Mar 11 12:09:26.846: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.25409ms)
Mar 11 12:09:26.848: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.289733ms)
Mar 11 12:09:26.850: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.143365ms)
Mar 11 12:09:26.852: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 1.947022ms)
Mar 11 12:09:26.854: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.003416ms)
Mar 11 12:09:26.856: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 2.035562ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:09:26.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-7sq7w" for this suite. Mar 11 12:09:32.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:09:32.955: INFO: namespace: e2e-tests-proxy-7sq7w, resource: bindings, ignored listing per whitelist Mar 11 12:09:32.955: INFO: namespace e2e-tests-proxy-7sq7w deletion completed in 6.095965808s • [SLOW TEST:6.239 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:09:32.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Mar 11 12:09:35.096: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-2af84cb8-6391-11ea-bacb-0242ac11000a", GenerateName:"", Namespace:"e2e-tests-pods-7fcmz", SelfLink:"/api/v1/namespaces/e2e-tests-pods-7fcmz/pods/pod-submit-remove-2af84cb8-6391-11ea-bacb-0242ac11000a", UID:"2af99d95-6391-11ea-9978-0242ac11000d", ResourceVersion:"516245", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719525373, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"54488389", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nfv4j", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00140f180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), 
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nfv4j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0022d03c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c196e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022d0410)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022d0430)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022d0438), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022d043c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719525373, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719525374, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719525374, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719525373, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.11", PodIP:"10.244.2.120", StartTime:(*v1.Time)(0xc0025015e0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002501600), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://16a13506ab99885dd00ded62cb044c5a539a7003ebe929287d7daa8f3e37ee9c"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 11 12:09:40.109: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:09:40.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-7fcmz" for this suite. Mar 11 12:09:46.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:09:46.148: INFO: namespace: e2e-tests-pods-7fcmz, resource: bindings, ignored listing per whitelist Mar 11 12:09:46.197: INFO: namespace e2e-tests-pods-7fcmz deletion completed in 6.081960954s • [SLOW TEST:13.242 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:09:46.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-m5xhf Mar 11 12:09:48.341: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-m5xhf STEP: 
checking the pod's current state and verifying that restartCount is present Mar 11 12:09:48.343: INFO: Initial restart count of pod liveness-exec is 0 Mar 11 12:10:38.429: INFO: Restart count of pod e2e-tests-container-probe-m5xhf/liveness-exec is now 1 (50.08598499s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:10:38.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-m5xhf" for this suite. Mar 11 12:10:44.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:10:44.533: INFO: namespace: e2e-tests-container-probe-m5xhf, resource: bindings, ignored listing per whitelist Mar 11 12:10:44.539: INFO: namespace e2e-tests-container-probe-m5xhf deletion completed in 6.088204803s • [SLOW TEST:58.342 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:10:44.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0311 12:11:15.239470 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 11 12:11:15.239: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:11:15.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-6ph8j" for this suite. Mar 11 12:11:21.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:11:21.309: INFO: namespace: e2e-tests-gc-6ph8j, resource: bindings, ignored listing per whitelist Mar 11 12:11:21.314: INFO: namespace e2e-tests-gc-6ph8j deletion completed in 6.070299915s • [SLOW TEST:36.775 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:11:21.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 11 12:11:21.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:22.921: INFO: stderr: "" Mar 11 12:11:22.921: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 11 12:11:22.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:23.058: INFO: stderr: "" Mar 11 12:11:23.058: INFO: stdout: "update-demo-nautilus-kcxvj update-demo-nautilus-mr7d5 " Mar 11 12:11:23.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcxvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:23.140: INFO: stderr: "" Mar 11 12:11:23.140: INFO: stdout: "" Mar 11 12:11:23.140: INFO: update-demo-nautilus-kcxvj is created but not running Mar 11 12:11:28.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:28.253: INFO: stderr: "" Mar 11 12:11:28.254: INFO: stdout: "update-demo-nautilus-kcxvj update-demo-nautilus-mr7d5 " Mar 11 12:11:28.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcxvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:28.342: INFO: stderr: "" Mar 11 12:11:28.342: INFO: stdout: "true" Mar 11 12:11:28.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcxvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:28.426: INFO: stderr: "" Mar 11 12:11:28.426: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 12:11:28.426: INFO: validating pod update-demo-nautilus-kcxvj Mar 11 12:11:28.429: INFO: got data: { "image": "nautilus.jpg" } Mar 11 12:11:28.429: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 12:11:28.429: INFO: update-demo-nautilus-kcxvj is verified up and running Mar 11 12:11:28.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr7d5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:28.506: INFO: stderr: "" Mar 11 12:11:28.506: INFO: stdout: "true" Mar 11 12:11:28.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr7d5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:28.570: INFO: stderr: "" Mar 11 12:11:28.570: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 12:11:28.570: INFO: validating pod update-demo-nautilus-mr7d5 Mar 11 12:11:28.573: INFO: got data: { "image": "nautilus.jpg" } Mar 11 12:11:28.573: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 11 12:11:28.573: INFO: update-demo-nautilus-mr7d5 is verified up and running STEP: scaling down the replication controller Mar 11 12:11:28.574: INFO: scanned /root for discovery docs: Mar 11 12:11:28.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:29.691: INFO: stderr: "" Mar 11 12:11:29.691: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 11 12:11:29.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:29.807: INFO: stderr: "" Mar 11 12:11:29.807: INFO: stdout: "update-demo-nautilus-kcxvj update-demo-nautilus-mr7d5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 11 12:11:34.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:34.896: INFO: stderr: "" Mar 11 12:11:34.896: INFO: stdout: "update-demo-nautilus-kcxvj " Mar 11 12:11:34.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcxvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:34.958: INFO: stderr: "" Mar 11 12:11:34.958: INFO: stdout: "true" Mar 11 12:11:34.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcxvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:35.024: INFO: stderr: "" Mar 11 12:11:35.024: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 12:11:35.024: INFO: validating pod update-demo-nautilus-kcxvj Mar 11 12:11:35.026: INFO: got data: { "image": "nautilus.jpg" } Mar 11 12:11:35.026: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 12:11:35.027: INFO: update-demo-nautilus-kcxvj is verified up and running STEP: scaling up the replication controller Mar 11 12:11:35.028: INFO: scanned /root for discovery docs: Mar 11 12:11:35.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:36.138: INFO: stderr: "" Mar 11 12:11:36.138: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 11 12:11:36.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:36.223: INFO: stderr: "" Mar 11 12:11:36.223: INFO: stdout: "update-demo-nautilus-5lk4z update-demo-nautilus-kcxvj " Mar 11 12:11:36.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5lk4z -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:36.318: INFO: stderr: "" Mar 11 12:11:36.318: INFO: stdout: "" Mar 11 12:11:36.318: INFO: update-demo-nautilus-5lk4z is created but not running Mar 11 12:11:41.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:41.430: INFO: stderr: "" Mar 11 12:11:41.430: INFO: stdout: "update-demo-nautilus-5lk4z update-demo-nautilus-kcxvj " Mar 11 12:11:41.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5lk4z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:41.500: INFO: stderr: "" Mar 11 12:11:41.500: INFO: stdout: "true" Mar 11 12:11:41.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5lk4z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:41.585: INFO: stderr: "" Mar 11 12:11:41.585: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 12:11:41.585: INFO: validating pod update-demo-nautilus-5lk4z Mar 11 12:11:41.589: INFO: got data: { "image": "nautilus.jpg" } Mar 11 12:11:41.589: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 12:11:41.589: INFO: update-demo-nautilus-5lk4z is verified up and running Mar 11 12:11:41.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcxvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:41.656: INFO: stderr: "" Mar 11 12:11:41.656: INFO: stdout: "true" Mar 11 12:11:41.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcxvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:41.730: INFO: stderr: "" Mar 11 12:11:41.730: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 12:11:41.730: INFO: validating pod update-demo-nautilus-kcxvj Mar 11 12:11:41.732: INFO: got data: { "image": "nautilus.jpg" } Mar 11 12:11:41.732: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 12:11:41.732: INFO: update-demo-nautilus-kcxvj is verified up and running STEP: using delete to clean up resources Mar 11 12:11:41.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:41.800: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 11 12:11:41.800: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 11 12:11:41.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-6zscb' Mar 11 12:11:41.881: INFO: stderr: "No resources found.\n" Mar 11 12:11:41.881: INFO: stdout: "" Mar 11 12:11:41.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-6zscb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 11 12:11:41.958: INFO: stderr: "" Mar 11 12:11:41.958: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:11:41.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6zscb" for this suite. Mar 11 12:12:03.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:12:04.004: INFO: namespace: e2e-tests-kubectl-6zscb, resource: bindings, ignored listing per whitelist Mar 11 12:12:04.047: INFO: namespace e2e-tests-kubectl-6zscb deletion completed in 22.086851148s • [SLOW TEST:42.733 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:12:04.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-5prnf [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Mar 11 12:12:04.173: INFO: Found 0 stateful pods, waiting for 3 Mar 11 12:12:14.178: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 11 12:12:14.178: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 11 12:12:14.178: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 11 12:12:14.190: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5prnf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 12:12:14.420: INFO: stderr: "I0311 12:12:14.321922 2940 log.go:172] (0xc000154790) (0xc0005ef360) Create stream\nI0311 12:12:14.321972 2940 log.go:172] (0xc000154790) (0xc0005ef360) Stream added, broadcasting: 1\nI0311 12:12:14.324429 2940 log.go:172] (0xc000154790) Reply frame received for 1\nI0311 12:12:14.324471 2940 log.go:172] (0xc000154790) (0xc00051a000) Create stream\nI0311 12:12:14.324481 2940 log.go:172] (0xc000154790) (0xc00051a000) Stream added, broadcasting: 3\nI0311 12:12:14.325190 2940 log.go:172] (0xc000154790) Reply frame received for 3\nI0311 12:12:14.325233 2940 log.go:172] (0xc000154790) (0xc00051a0a0) Create stream\nI0311 12:12:14.325263 2940 log.go:172] (0xc000154790) (0xc00051a0a0) Stream added, broadcasting: 5\nI0311 12:12:14.326077 2940 log.go:172] (0xc000154790) Reply frame received for 5\nI0311 12:12:14.414969 2940 log.go:172] (0xc000154790) Data frame received for 3\nI0311 12:12:14.414995 2940 log.go:172] (0xc00051a000) (3) Data frame handling\nI0311 12:12:14.415013 2940 log.go:172] (0xc00051a000) (3) Data frame sent\nI0311 12:12:14.415020 2940 log.go:172] (0xc000154790) Data frame received for 3\nI0311 12:12:14.415028 2940 log.go:172] (0xc00051a000) (3) Data frame handling\nI0311 12:12:14.415561 2940 log.go:172] (0xc000154790) Data frame received for 5\nI0311 12:12:14.415578 2940 log.go:172] (0xc00051a0a0) (5) Data frame handling\nI0311 12:12:14.416990 2940 log.go:172] (0xc000154790) Data frame received for 1\nI0311 12:12:14.417017 2940 log.go:172] (0xc0005ef360) (1) Data frame handling\nI0311 12:12:14.417031 2940 log.go:172] (0xc0005ef360) (1) Data frame sent\nI0311 12:12:14.417046 2940 log.go:172] (0xc000154790) (0xc0005ef360) Stream removed, broadcasting: 1\nI0311 12:12:14.417063 2940 log.go:172] (0xc000154790) Go away received\nI0311 12:12:14.417210 2940 log.go:172] (0xc000154790) (0xc0005ef360) Stream removed, broadcasting: 1\nI0311 12:12:14.417225 2940 log.go:172] (0xc000154790) (0xc00051a000) Stream removed, broadcasting: 3\nI0311 12:12:14.417235 2940 log.go:172] (0xc000154790) (0xc00051a0a0) Stream removed, broadcasting: 5\n" Mar 11 12:12:14.420: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 12:12:14.420: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 11 12:12:24.448: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 11 12:12:34.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5prnf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 12:12:34.693: INFO: stderr: "I0311 12:12:34.618336 2961 log.go:172] (0xc000138840) (0xc000758640) Create stream\nI0311 12:12:34.618373 2961 log.go:172] (0xc000138840) (0xc000758640) Stream added, broadcasting: 1\nI0311 12:12:34.619822 2961 log.go:172] (0xc000138840) Reply frame received for 1\nI0311 12:12:34.619857 2961 log.go:172] (0xc000138840) (0xc000652dc0) Create stream\nI0311 12:12:34.619866 2961 log.go:172] (0xc000138840) (0xc000652dc0) Stream added, broadcasting: 3\nI0311 12:12:34.620622 2961 log.go:172] (0xc000138840) Reply frame received for 3\nI0311 
12:12:34.620651 2961 log.go:172] (0xc000138840) (0xc0005b0000) Create stream\nI0311 12:12:34.620676 2961 log.go:172] (0xc000138840) (0xc0005b0000) Stream added, broadcasting: 5\nI0311 12:12:34.621250 2961 log.go:172] (0xc000138840) Reply frame received for 5\nI0311 12:12:34.690332 2961 log.go:172] (0xc000138840) Data frame received for 5\nI0311 12:12:34.690348 2961 log.go:172] (0xc0005b0000) (5) Data frame handling\nI0311 12:12:34.690383 2961 log.go:172] (0xc000138840) Data frame received for 3\nI0311 12:12:34.690402 2961 log.go:172] (0xc000652dc0) (3) Data frame handling\nI0311 12:12:34.690411 2961 log.go:172] (0xc000652dc0) (3) Data frame sent\nI0311 12:12:34.690417 2961 log.go:172] (0xc000138840) Data frame received for 3\nI0311 12:12:34.690421 2961 log.go:172] (0xc000652dc0) (3) Data frame handling\nI0311 12:12:34.691006 2961 log.go:172] (0xc000138840) Data frame received for 1\nI0311 12:12:34.691016 2961 log.go:172] (0xc000758640) (1) Data frame handling\nI0311 12:12:34.691021 2961 log.go:172] (0xc000758640) (1) Data frame sent\nI0311 12:12:34.691029 2961 log.go:172] (0xc000138840) (0xc000758640) Stream removed, broadcasting: 1\nI0311 12:12:34.691061 2961 log.go:172] (0xc000138840) Go away received\nI0311 12:12:34.691205 2961 log.go:172] (0xc000138840) (0xc000758640) Stream removed, broadcasting: 1\nI0311 12:12:34.691217 2961 log.go:172] (0xc000138840) (0xc000652dc0) Stream removed, broadcasting: 3\nI0311 12:12:34.691222 2961 log.go:172] (0xc000138840) (0xc0005b0000) Stream removed, broadcasting: 5\n" Mar 11 12:12:34.693: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 12:12:34.693: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 12:12:44.709: INFO: Waiting for StatefulSet e2e-tests-statefulset-5prnf/ss2 to complete update Mar 11 12:12:44.709: INFO: Waiting for Pod e2e-tests-statefulset-5prnf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 12:12:44.709: INFO: Waiting for Pod e2e-tests-statefulset-5prnf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 12:12:44.709: INFO: Waiting for Pod e2e-tests-statefulset-5prnf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 12:12:54.716: INFO: Waiting for StatefulSet e2e-tests-statefulset-5prnf/ss2 to complete update Mar 11 12:12:54.716: INFO: Waiting for Pod e2e-tests-statefulset-5prnf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 12:13:04.716: INFO: Waiting for StatefulSet e2e-tests-statefulset-5prnf/ss2 to complete update Mar 11 12:13:04.716: INFO: Waiting for Pod e2e-tests-statefulset-5prnf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Mar 11 12:13:14.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5prnf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 12:13:14.945: INFO: stderr: "I0311 12:13:14.855928 2983 log.go:172] (0xc000138580) (0xc00089a5a0) Create stream\nI0311 12:13:14.856000 2983 log.go:172] (0xc000138580) (0xc00089a5a0) Stream added, broadcasting: 1\nI0311 12:13:14.858026 2983 log.go:172] (0xc000138580) Reply frame received for 1\nI0311 12:13:14.858058 2983 log.go:172] (0xc000138580) (0xc0005ded20) Create stream\nI0311 12:13:14.858067 2983 log.go:172] (0xc000138580) (0xc0005ded20) Stream added, broadcasting: 3\nI0311 12:13:14.858916 2983 
log.go:172] (0xc000138580) Reply frame received for 3\nI0311 12:13:14.858940 2983 log.go:172] (0xc000138580) (0xc00089a640) Create stream\nI0311 12:13:14.858947 2983 log.go:172] (0xc000138580) (0xc00089a640) Stream added, broadcasting: 5\nI0311 12:13:14.859733 2983 log.go:172] (0xc000138580) Reply frame received for 5\nI0311 12:13:14.941016 2983 log.go:172] (0xc000138580) Data frame received for 5\nI0311 12:13:14.941076 2983 log.go:172] (0xc000138580) Data frame received for 3\nI0311 12:13:14.941113 2983 log.go:172] (0xc0005ded20) (3) Data frame handling\nI0311 12:13:14.941134 2983 log.go:172] (0xc0005ded20) (3) Data frame sent\nI0311 12:13:14.941152 2983 log.go:172] (0xc000138580) Data frame received for 3\nI0311 12:13:14.941168 2983 log.go:172] (0xc00089a640) (5) Data frame handling\nI0311 12:13:14.941200 2983 log.go:172] (0xc0005ded20) (3) Data frame handling\nI0311 12:13:14.942482 2983 log.go:172] (0xc000138580) Data frame received for 1\nI0311 12:13:14.942504 2983 log.go:172] (0xc00089a5a0) (1) Data frame handling\nI0311 12:13:14.942513 2983 log.go:172] (0xc00089a5a0) (1) Data frame sent\nI0311 12:13:14.942532 2983 log.go:172] (0xc000138580) (0xc00089a5a0) Stream removed, broadcasting: 1\nI0311 12:13:14.942546 2983 log.go:172] (0xc000138580) Go away received\nI0311 12:13:14.942761 2983 log.go:172] (0xc000138580) (0xc00089a5a0) Stream removed, broadcasting: 1\nI0311 12:13:14.942790 2983 log.go:172] (0xc000138580) (0xc0005ded20) Stream removed, broadcasting: 3\nI0311 12:13:14.942800 2983 log.go:172] (0xc000138580) (0xc00089a640) Stream removed, broadcasting: 5\n" Mar 11 12:13:14.945: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 12:13:14.945: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 12:13:24.975: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 11 12:13:35.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5prnf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 12:13:35.236: INFO: stderr: "I0311 12:13:35.156849 3007 log.go:172] (0xc0007482c0) (0xc0006ae640) Create stream\nI0311 12:13:35.156922 3007 log.go:172] (0xc0007482c0) (0xc0006ae640) Stream added, broadcasting: 1\nI0311 12:13:35.159116 3007 log.go:172] (0xc0007482c0) Reply frame received for 1\nI0311 12:13:35.159157 3007 log.go:172] (0xc0007482c0) (0xc000648d20) Create stream\nI0311 12:13:35.159167 3007 log.go:172] (0xc0007482c0) (0xc000648d20) Stream added, broadcasting: 3\nI0311 12:13:35.160079 3007 log.go:172] (0xc0007482c0) Reply frame received for 3\nI0311 12:13:35.160110 3007 log.go:172] (0xc0007482c0) (0xc0003b4000) Create stream\nI0311 12:13:35.160128 3007 log.go:172] (0xc0007482c0) (0xc0003b4000) Stream added, broadcasting: 5\nI0311 12:13:35.161007 3007 log.go:172] (0xc0007482c0) Reply frame received for 5\nI0311 12:13:35.230577 3007 log.go:172] (0xc0007482c0) Data frame received for 5\nI0311 12:13:35.230602 3007 log.go:172] (0xc0003b4000) (5) Data frame handling\nI0311 12:13:35.230628 3007 log.go:172] (0xc0007482c0) Data frame received for 3\nI0311 12:13:35.230634 3007 log.go:172] (0xc000648d20) (3) Data frame handling\nI0311 12:13:35.230641 3007 log.go:172] (0xc000648d20) (3) Data frame sent\nI0311 12:13:35.230648 3007 log.go:172] (0xc0007482c0) Data frame received for 3\nI0311 12:13:35.230654 3007 log.go:172] (0xc000648d20) (3) Data frame handling\nI0311 
12:13:35.232466 3007 log.go:172] (0xc0007482c0) Data frame received for 1\nI0311 12:13:35.232485 3007 log.go:172] (0xc0006ae640) (1) Data frame handling\nI0311 12:13:35.232497 3007 log.go:172] (0xc0006ae640) (1) Data frame sent\nI0311 12:13:35.232510 3007 log.go:172] (0xc0007482c0) (0xc0006ae640) Stream removed, broadcasting: 1\nI0311 12:13:35.232523 3007 log.go:172] (0xc0007482c0) Go away received\nI0311 12:13:35.232722 3007 log.go:172] (0xc0007482c0) (0xc0006ae640) Stream removed, broadcasting: 1\nI0311 12:13:35.232742 3007 log.go:172] (0xc0007482c0) (0xc000648d20) Stream removed, broadcasting: 3\nI0311 12:13:35.232752 3007 log.go:172] (0xc0007482c0) (0xc0003b4000) Stream removed, broadcasting: 5\n" Mar 11 12:13:35.236: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 12:13:35.236: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 12:13:45.258: INFO: Waiting for StatefulSet e2e-tests-statefulset-5prnf/ss2 to complete update Mar 11 12:13:45.258: INFO: Waiting for Pod e2e-tests-statefulset-5prnf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 11 12:13:45.258: INFO: Waiting for Pod e2e-tests-statefulset-5prnf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 11 12:13:45.258: INFO: Waiting for Pod e2e-tests-statefulset-5prnf/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 11 12:13:55.265: INFO: Waiting for StatefulSet e2e-tests-statefulset-5prnf/ss2 to complete update Mar 11 12:13:55.266: INFO: Waiting for Pod e2e-tests-statefulset-5prnf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 11 12:13:55.266: INFO: Waiting for Pod e2e-tests-statefulset-5prnf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 11 12:14:05.266: INFO: Waiting for StatefulSet e2e-tests-statefulset-5prnf/ss2 to complete update Mar 11 12:14:05.266: INFO: Waiting for Pod e2e-tests-statefulset-5prnf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 11 12:14:15.264: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5prnf Mar 11 12:14:15.266: INFO: Scaling statefulset ss2 to 0 Mar 11 12:14:35.287: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 12:14:35.289: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:14:35.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-5prnf" for this suite. 
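The rolling update and rollback above are driven through the API by the framework; a rough manual equivalent, assuming the container in ss2's pod template is named nginx (the unready-status messages later in this log report nginx as the container name) and using the images reported above, could look like:

  # Roll the template image forward and watch the rolling update.
  kubectl --namespace=e2e-tests-statefulset-5prnf set image statefulset/ss2 \
    nginx=docker.io/library/nginx:1.15-alpine
  kubectl --namespace=e2e-tests-statefulset-5prnf rollout status statefulset/ss2
  # Roll back by restoring the previous template image.
  kubectl --namespace=e2e-tests-statefulset-5prnf set image statefulset/ss2 \
    nginx=docker.io/library/nginx:1.14-alpine
  kubectl --namespace=e2e-tests-statefulset-5prnf rollout status statefulset/ss2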
Mar 11 12:14:41.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:14:41.373: INFO: namespace: e2e-tests-statefulset-5prnf, resource: bindings, ignored listing per whitelist Mar 11 12:14:41.420: INFO: namespace e2e-tests-statefulset-5prnf deletion completed in 6.096332345s • [SLOW TEST:157.373 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:14:41.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:14:43.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-f5svs" for this suite. 
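The "should not conflict" case above boils down to mounting a secret volume and a configMap volume side by side in one pod; a minimal self-contained sketch of that scenario (every name and the namespace below are hypothetical, not the objects generated by the suite) could look like:

  kubectl create namespace wrapper-demo
  kubectl -n wrapper-demo create secret generic wrapper-secret --from-literal=data-1=value-1
  kubectl -n wrapper-demo create configmap wrapper-configmap --from-literal=data-1=value-1
  cat <<'EOF' | kubectl -n wrapper-demo apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: wrapper-volumes-pod
  spec:
    restartPolicy: Never
    containers:
    - name: check
      image: busybox
      # Succeeds only if both volumes mounted cleanly next to each other.
      command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: wrapper-secret
    - name: configmap-volume
      configMap:
        name: wrapper-configmap
  EOF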
Mar 11 12:14:49.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:14:49.719: INFO: namespace: e2e-tests-emptydir-wrapper-f5svs, resource: bindings, ignored listing per whitelist Mar 11 12:14:49.788: INFO: namespace e2e-tests-emptydir-wrapper-f5svs deletion completed in 6.108912761s • [SLOW TEST:8.368 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:14:49.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-f6q6x [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-f6q6x STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-f6q6x Mar 11 12:14:49.899: INFO: Found 0 stateful pods, waiting for 1 Mar 11 12:14:59.904: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 11 12:14:59.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f6q6x ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 12:15:00.172: INFO: stderr: "I0311 12:15:00.082756 3029 log.go:172] (0xc000138630) (0xc0005c14a0) Create stream\nI0311 12:15:00.082816 3029 log.go:172] (0xc000138630) (0xc0005c14a0) Stream added, broadcasting: 1\nI0311 12:15:00.084542 3029 log.go:172] (0xc000138630) Reply frame received for 1\nI0311 12:15:00.084586 3029 log.go:172] (0xc000138630) (0xc0005c1540) Create stream\nI0311 12:15:00.084595 3029 log.go:172] (0xc000138630) (0xc0005c1540) Stream added, broadcasting: 3\nI0311 12:15:00.085482 3029 log.go:172] (0xc000138630) Reply frame received for 3\nI0311 12:15:00.085515 3029 log.go:172] (0xc000138630) (0xc000624000) Create stream\nI0311 12:15:00.085526 3029 log.go:172] (0xc000138630) (0xc000624000) Stream added, broadcasting: 5\nI0311 12:15:00.086474 3029 log.go:172] (0xc000138630) Reply frame received for 5\nI0311 12:15:00.167650 3029 log.go:172] (0xc000138630) Data frame received for 3\nI0311 12:15:00.167686 3029 log.go:172] (0xc0005c1540) (3) Data frame handling\nI0311 12:15:00.167703 3029 
log.go:172] (0xc0005c1540) (3) Data frame sent\nI0311 12:15:00.167717 3029 log.go:172] (0xc000138630) Data frame received for 3\nI0311 12:15:00.167726 3029 log.go:172] (0xc0005c1540) (3) Data frame handling\nI0311 12:15:00.167884 3029 log.go:172] (0xc000138630) Data frame received for 5\nI0311 12:15:00.167901 3029 log.go:172] (0xc000624000) (5) Data frame handling\nI0311 12:15:00.169517 3029 log.go:172] (0xc000138630) Data frame received for 1\nI0311 12:15:00.169537 3029 log.go:172] (0xc0005c14a0) (1) Data frame handling\nI0311 12:15:00.169545 3029 log.go:172] (0xc0005c14a0) (1) Data frame sent\nI0311 12:15:00.169554 3029 log.go:172] (0xc000138630) (0xc0005c14a0) Stream removed, broadcasting: 1\nI0311 12:15:00.169588 3029 log.go:172] (0xc000138630) Go away received\nI0311 12:15:00.169682 3029 log.go:172] (0xc000138630) (0xc0005c14a0) Stream removed, broadcasting: 1\nI0311 12:15:00.169697 3029 log.go:172] (0xc000138630) (0xc0005c1540) Stream removed, broadcasting: 3\nI0311 12:15:00.169704 3029 log.go:172] (0xc000138630) (0xc000624000) Stream removed, broadcasting: 5\n" Mar 11 12:15:00.172: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 12:15:00.172: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 12:15:00.177: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 11 12:15:10.181: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 11 12:15:10.181: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 12:15:10.228: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 12:15:10.228: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC }] Mar 11 12:15:10.228: INFO: Mar 11 12:15:10.228: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 11 12:15:11.408: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.962780882s Mar 11 12:15:12.412: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.782978091s Mar 11 12:15:13.423: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.778545341s Mar 11 12:15:14.427: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.767949812s Mar 11 12:15:15.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.764210075s Mar 11 12:15:16.435: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.760501793s Mar 11 12:15:17.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.755859611s Mar 11 12:15:18.444: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.751077004s Mar 11 12:15:19.453: INFO: Verifying statefulset ss doesn't scale past 3 for another 746.568036ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-f6q6x Mar 11 12:15:20.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f6q6x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 12:15:20.669: INFO: stderr: 
"I0311 12:15:20.599382 3050 log.go:172] (0xc000740370) (0xc00075c640) Create stream\nI0311 12:15:20.599430 3050 log.go:172] (0xc000740370) (0xc00075c640) Stream added, broadcasting: 1\nI0311 12:15:20.602045 3050 log.go:172] (0xc000740370) Reply frame received for 1\nI0311 12:15:20.602085 3050 log.go:172] (0xc000740370) (0xc00064cd20) Create stream\nI0311 12:15:20.602099 3050 log.go:172] (0xc000740370) (0xc00064cd20) Stream added, broadcasting: 3\nI0311 12:15:20.603007 3050 log.go:172] (0xc000740370) Reply frame received for 3\nI0311 12:15:20.603036 3050 log.go:172] (0xc000740370) (0xc00064ce60) Create stream\nI0311 12:15:20.603048 3050 log.go:172] (0xc000740370) (0xc00064ce60) Stream added, broadcasting: 5\nI0311 12:15:20.603801 3050 log.go:172] (0xc000740370) Reply frame received for 5\nI0311 12:15:20.665021 3050 log.go:172] (0xc000740370) Data frame received for 3\nI0311 12:15:20.665048 3050 log.go:172] (0xc00064cd20) (3) Data frame handling\nI0311 12:15:20.665059 3050 log.go:172] (0xc00064cd20) (3) Data frame sent\nI0311 12:15:20.665069 3050 log.go:172] (0xc000740370) Data frame received for 3\nI0311 12:15:20.665080 3050 log.go:172] (0xc00064cd20) (3) Data frame handling\nI0311 12:15:20.665172 3050 log.go:172] (0xc000740370) Data frame received for 5\nI0311 12:15:20.665191 3050 log.go:172] (0xc00064ce60) (5) Data frame handling\nI0311 12:15:20.666383 3050 log.go:172] (0xc000740370) Data frame received for 1\nI0311 12:15:20.666410 3050 log.go:172] (0xc00075c640) (1) Data frame handling\nI0311 12:15:20.666422 3050 log.go:172] (0xc00075c640) (1) Data frame sent\nI0311 12:15:20.666433 3050 log.go:172] (0xc000740370) (0xc00075c640) Stream removed, broadcasting: 1\nI0311 12:15:20.666449 3050 log.go:172] (0xc000740370) Go away received\nI0311 12:15:20.666676 3050 log.go:172] (0xc000740370) (0xc00075c640) Stream removed, broadcasting: 1\nI0311 12:15:20.666693 3050 log.go:172] (0xc000740370) (0xc00064cd20) Stream removed, broadcasting: 3\nI0311 12:15:20.666700 3050 log.go:172] (0xc000740370) (0xc00064ce60) Stream removed, broadcasting: 5\n" Mar 11 12:15:20.669: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 12:15:20.669: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 12:15:20.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f6q6x ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 12:15:20.847: INFO: stderr: "I0311 12:15:20.778264 3073 log.go:172] (0xc000138840) (0xc0005ef220) Create stream\nI0311 12:15:20.778308 3073 log.go:172] (0xc000138840) (0xc0005ef220) Stream added, broadcasting: 1\nI0311 12:15:20.780474 3073 log.go:172] (0xc000138840) Reply frame received for 1\nI0311 12:15:20.780536 3073 log.go:172] (0xc000138840) (0xc0005e8000) Create stream\nI0311 12:15:20.780575 3073 log.go:172] (0xc000138840) (0xc0005e8000) Stream added, broadcasting: 3\nI0311 12:15:20.781463 3073 log.go:172] (0xc000138840) Reply frame received for 3\nI0311 12:15:20.781493 3073 log.go:172] (0xc000138840) (0xc0005ef2c0) Create stream\nI0311 12:15:20.781503 3073 log.go:172] (0xc000138840) (0xc0005ef2c0) Stream added, broadcasting: 5\nI0311 12:15:20.782355 3073 log.go:172] (0xc000138840) Reply frame received for 5\nI0311 12:15:20.843408 3073 log.go:172] (0xc000138840) Data frame received for 3\nI0311 12:15:20.843439 3073 log.go:172] (0xc0005e8000) (3) Data frame handling\nI0311 12:15:20.843457 3073 
log.go:172] (0xc0005e8000) (3) Data frame sent\nI0311 12:15:20.843467 3073 log.go:172] (0xc000138840) Data frame received for 3\nI0311 12:15:20.843475 3073 log.go:172] (0xc0005e8000) (3) Data frame handling\nI0311 12:15:20.843632 3073 log.go:172] (0xc000138840) Data frame received for 5\nI0311 12:15:20.843648 3073 log.go:172] (0xc0005ef2c0) (5) Data frame handling\nI0311 12:15:20.843663 3073 log.go:172] (0xc0005ef2c0) (5) Data frame sent\nI0311 12:15:20.843674 3073 log.go:172] (0xc000138840) Data frame received for 5\nI0311 12:15:20.843681 3073 log.go:172] (0xc0005ef2c0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0311 12:15:20.844669 3073 log.go:172] (0xc000138840) Data frame received for 1\nI0311 12:15:20.844682 3073 log.go:172] (0xc0005ef220) (1) Data frame handling\nI0311 12:15:20.844693 3073 log.go:172] (0xc0005ef220) (1) Data frame sent\nI0311 12:15:20.844702 3073 log.go:172] (0xc000138840) (0xc0005ef220) Stream removed, broadcasting: 1\nI0311 12:15:20.844717 3073 log.go:172] (0xc000138840) Go away received\nI0311 12:15:20.844906 3073 log.go:172] (0xc000138840) (0xc0005ef220) Stream removed, broadcasting: 1\nI0311 12:15:20.844919 3073 log.go:172] (0xc000138840) (0xc0005e8000) Stream removed, broadcasting: 3\nI0311 12:15:20.844925 3073 log.go:172] (0xc000138840) (0xc0005ef2c0) Stream removed, broadcasting: 5\n" Mar 11 12:15:20.847: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 12:15:20.847: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 12:15:20.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f6q6x ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 12:15:20.982: INFO: stderr: "I0311 12:15:20.930482 3095 log.go:172] (0xc0006ca370) (0xc0006f4640) Create stream\nI0311 12:15:20.930510 3095 log.go:172] (0xc0006ca370) (0xc0006f4640) Stream added, broadcasting: 1\nI0311 12:15:20.933412 3095 log.go:172] (0xc0006ca370) Reply frame received for 1\nI0311 12:15:20.933434 3095 log.go:172] (0xc0006ca370) (0xc0000d0dc0) Create stream\nI0311 12:15:20.933443 3095 log.go:172] (0xc0006ca370) (0xc0000d0dc0) Stream added, broadcasting: 3\nI0311 12:15:20.933953 3095 log.go:172] (0xc0006ca370) Reply frame received for 3\nI0311 12:15:20.933978 3095 log.go:172] (0xc0006ca370) (0xc0000d0f00) Create stream\nI0311 12:15:20.933988 3095 log.go:172] (0xc0006ca370) (0xc0000d0f00) Stream added, broadcasting: 5\nI0311 12:15:20.936387 3095 log.go:172] (0xc0006ca370) Reply frame received for 5\nI0311 12:15:20.979198 3095 log.go:172] (0xc0006ca370) Data frame received for 3\nI0311 12:15:20.979217 3095 log.go:172] (0xc0000d0dc0) (3) Data frame handling\nI0311 12:15:20.979227 3095 log.go:172] (0xc0000d0dc0) (3) Data frame sent\nI0311 12:15:20.979235 3095 log.go:172] (0xc0006ca370) Data frame received for 3\nI0311 12:15:20.979240 3095 log.go:172] (0xc0000d0dc0) (3) Data frame handling\nI0311 12:15:20.979299 3095 log.go:172] (0xc0006ca370) Data frame received for 5\nI0311 12:15:20.979314 3095 log.go:172] (0xc0000d0f00) (5) Data frame handling\nI0311 12:15:20.979327 3095 log.go:172] (0xc0000d0f00) (5) Data frame sent\nI0311 12:15:20.979337 3095 log.go:172] (0xc0006ca370) Data frame received for 5\nI0311 12:15:20.979343 3095 log.go:172] (0xc0000d0f00) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0311 12:15:20.980134 
3095 log.go:172] (0xc0006ca370) Data frame received for 1\nI0311 12:15:20.980153 3095 log.go:172] (0xc0006f4640) (1) Data frame handling\nI0311 12:15:20.980168 3095 log.go:172] (0xc0006f4640) (1) Data frame sent\nI0311 12:15:20.980181 3095 log.go:172] (0xc0006ca370) (0xc0006f4640) Stream removed, broadcasting: 1\nI0311 12:15:20.980196 3095 log.go:172] (0xc0006ca370) Go away received\nI0311 12:15:20.980432 3095 log.go:172] (0xc0006ca370) (0xc0006f4640) Stream removed, broadcasting: 1\nI0311 12:15:20.980443 3095 log.go:172] (0xc0006ca370) (0xc0000d0dc0) Stream removed, broadcasting: 3\nI0311 12:15:20.980450 3095 log.go:172] (0xc0006ca370) (0xc0000d0f00) Stream removed, broadcasting: 5\n" Mar 11 12:15:20.982: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 12:15:20.982: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 12:15:20.985: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 11 12:15:20.985: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 11 12:15:20.985: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 11 12:15:20.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f6q6x ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 12:15:21.122: INFO: stderr: "I0311 12:15:21.073029 3117 log.go:172] (0xc00013a840) (0xc0005d3360) Create stream\nI0311 12:15:21.073063 3117 log.go:172] (0xc00013a840) (0xc0005d3360) Stream added, broadcasting: 1\nI0311 12:15:21.074586 3117 log.go:172] (0xc00013a840) Reply frame received for 1\nI0311 12:15:21.074624 3117 log.go:172] (0xc00013a840) (0xc0005d3400) Create stream\nI0311 12:15:21.074633 3117 log.go:172] (0xc00013a840) (0xc0005d3400) Stream added, broadcasting: 3\nI0311 12:15:21.075367 3117 log.go:172] (0xc00013a840) Reply frame received for 3\nI0311 12:15:21.075390 3117 log.go:172] (0xc00013a840) (0xc00075e000) Create stream\nI0311 12:15:21.075400 3117 log.go:172] (0xc00013a840) (0xc00075e000) Stream added, broadcasting: 5\nI0311 12:15:21.075918 3117 log.go:172] (0xc00013a840) Reply frame received for 5\nI0311 12:15:21.118996 3117 log.go:172] (0xc00013a840) Data frame received for 5\nI0311 12:15:21.119017 3117 log.go:172] (0xc00075e000) (5) Data frame handling\nI0311 12:15:21.119042 3117 log.go:172] (0xc00013a840) Data frame received for 3\nI0311 12:15:21.119070 3117 log.go:172] (0xc0005d3400) (3) Data frame handling\nI0311 12:15:21.119081 3117 log.go:172] (0xc0005d3400) (3) Data frame sent\nI0311 12:15:21.119090 3117 log.go:172] (0xc00013a840) Data frame received for 3\nI0311 12:15:21.119094 3117 log.go:172] (0xc0005d3400) (3) Data frame handling\nI0311 12:15:21.120145 3117 log.go:172] (0xc00013a840) Data frame received for 1\nI0311 12:15:21.120159 3117 log.go:172] (0xc0005d3360) (1) Data frame handling\nI0311 12:15:21.120164 3117 log.go:172] (0xc0005d3360) (1) Data frame sent\nI0311 12:15:21.120171 3117 log.go:172] (0xc00013a840) (0xc0005d3360) Stream removed, broadcasting: 1\nI0311 12:15:21.120180 3117 log.go:172] (0xc00013a840) Go away received\nI0311 12:15:21.120440 3117 log.go:172] (0xc00013a840) (0xc0005d3360) Stream removed, broadcasting: 1\nI0311 12:15:21.120465 3117 log.go:172] (0xc00013a840) (0xc0005d3400) Stream removed, broadcasting: 3\nI0311 
12:15:21.120481 3117 log.go:172] (0xc00013a840) (0xc00075e000) Stream removed, broadcasting: 5\n" Mar 11 12:15:21.122: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 12:15:21.122: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 12:15:21.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f6q6x ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 12:15:21.319: INFO: stderr: "I0311 12:15:21.210699 3140 log.go:172] (0xc0007f62c0) (0xc000702640) Create stream\nI0311 12:15:21.210735 3140 log.go:172] (0xc0007f62c0) (0xc000702640) Stream added, broadcasting: 1\nI0311 12:15:21.212384 3140 log.go:172] (0xc0007f62c0) Reply frame received for 1\nI0311 12:15:21.212422 3140 log.go:172] (0xc0007f62c0) (0xc0007b4c80) Create stream\nI0311 12:15:21.212436 3140 log.go:172] (0xc0007f62c0) (0xc0007b4c80) Stream added, broadcasting: 3\nI0311 12:15:21.212976 3140 log.go:172] (0xc0007f62c0) Reply frame received for 3\nI0311 12:15:21.213001 3140 log.go:172] (0xc0007f62c0) (0xc00053c000) Create stream\nI0311 12:15:21.213010 3140 log.go:172] (0xc0007f62c0) (0xc00053c000) Stream added, broadcasting: 5\nI0311 12:15:21.213642 3140 log.go:172] (0xc0007f62c0) Reply frame received for 5\nI0311 12:15:21.315501 3140 log.go:172] (0xc0007f62c0) Data frame received for 3\nI0311 12:15:21.315535 3140 log.go:172] (0xc0007b4c80) (3) Data frame handling\nI0311 12:15:21.315563 3140 log.go:172] (0xc0007b4c80) (3) Data frame sent\nI0311 12:15:21.315578 3140 log.go:172] (0xc0007f62c0) Data frame received for 3\nI0311 12:15:21.315606 3140 log.go:172] (0xc0007b4c80) (3) Data frame handling\nI0311 12:15:21.315701 3140 log.go:172] (0xc0007f62c0) Data frame received for 5\nI0311 12:15:21.315715 3140 log.go:172] (0xc00053c000) (5) Data frame handling\nI0311 12:15:21.317027 3140 log.go:172] (0xc0007f62c0) Data frame received for 1\nI0311 12:15:21.317046 3140 log.go:172] (0xc000702640) (1) Data frame handling\nI0311 12:15:21.317057 3140 log.go:172] (0xc000702640) (1) Data frame sent\nI0311 12:15:21.317067 3140 log.go:172] (0xc0007f62c0) (0xc000702640) Stream removed, broadcasting: 1\nI0311 12:15:21.317113 3140 log.go:172] (0xc0007f62c0) Go away received\nI0311 12:15:21.317189 3140 log.go:172] (0xc0007f62c0) (0xc000702640) Stream removed, broadcasting: 1\nI0311 12:15:21.317206 3140 log.go:172] (0xc0007f62c0) (0xc0007b4c80) Stream removed, broadcasting: 3\nI0311 12:15:21.317214 3140 log.go:172] (0xc0007f62c0) (0xc00053c000) Stream removed, broadcasting: 5\n" Mar 11 12:15:21.320: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 12:15:21.320: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 12:15:21.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f6q6x ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 12:15:21.494: INFO: stderr: "I0311 12:15:21.408527 3163 log.go:172] (0xc000138840) (0xc000699400) Create stream\nI0311 12:15:21.408560 3163 log.go:172] (0xc000138840) (0xc000699400) Stream added, broadcasting: 1\nI0311 12:15:21.410549 3163 log.go:172] (0xc000138840) Reply frame received for 1\nI0311 12:15:21.410586 3163 log.go:172] (0xc000138840) (0xc00071a000) Create stream\nI0311 12:15:21.410597 3163 log.go:172] 
(0xc000138840) (0xc00071a000) Stream added, broadcasting: 3\nI0311 12:15:21.411449 3163 log.go:172] (0xc000138840) Reply frame received for 3\nI0311 12:15:21.411470 3163 log.go:172] (0xc000138840) (0xc0006994a0) Create stream\nI0311 12:15:21.411477 3163 log.go:172] (0xc000138840) (0xc0006994a0) Stream added, broadcasting: 5\nI0311 12:15:21.412192 3163 log.go:172] (0xc000138840) Reply frame received for 5\nI0311 12:15:21.489857 3163 log.go:172] (0xc000138840) Data frame received for 3\nI0311 12:15:21.489910 3163 log.go:172] (0xc00071a000) (3) Data frame handling\nI0311 12:15:21.489924 3163 log.go:172] (0xc00071a000) (3) Data frame sent\nI0311 12:15:21.489933 3163 log.go:172] (0xc000138840) Data frame received for 3\nI0311 12:15:21.489947 3163 log.go:172] (0xc000138840) Data frame received for 5\nI0311 12:15:21.489969 3163 log.go:172] (0xc0006994a0) (5) Data frame handling\nI0311 12:15:21.489984 3163 log.go:172] (0xc00071a000) (3) Data frame handling\nI0311 12:15:21.491706 3163 log.go:172] (0xc000138840) Data frame received for 1\nI0311 12:15:21.491727 3163 log.go:172] (0xc000699400) (1) Data frame handling\nI0311 12:15:21.491744 3163 log.go:172] (0xc000699400) (1) Data frame sent\nI0311 12:15:21.491753 3163 log.go:172] (0xc000138840) (0xc000699400) Stream removed, broadcasting: 1\nI0311 12:15:21.491764 3163 log.go:172] (0xc000138840) Go away received\nI0311 12:15:21.491971 3163 log.go:172] (0xc000138840) (0xc000699400) Stream removed, broadcasting: 1\nI0311 12:15:21.491988 3163 log.go:172] (0xc000138840) (0xc00071a000) Stream removed, broadcasting: 3\nI0311 12:15:21.491996 3163 log.go:172] (0xc000138840) (0xc0006994a0) Stream removed, broadcasting: 5\n" Mar 11 12:15:21.495: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 12:15:21.495: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 12:15:21.495: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 12:15:21.497: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 11 12:15:31.504: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 11 12:15:31.504: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 11 12:15:31.504: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 11 12:15:31.531: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 12:15:31.531: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC }] Mar 11 12:15:31.531: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:31.531: INFO: ss-2 hunter-worker Running 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:31.531: INFO: Mar 11 12:15:31.531: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 12:15:32.536: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 12:15:32.536: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC }] Mar 11 12:15:32.536: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:32.536: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:32.536: INFO: Mar 11 12:15:32.536: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 12:15:33.552: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 12:15:33.552: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC }] Mar 11 12:15:33.552: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:33.552: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:33.552: INFO: Mar 11 12:15:33.552: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 12:15:34.561: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 12:15:34.561: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC }] Mar 11 12:15:34.561: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:34.561: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:34.561: INFO: Mar 11 12:15:34.561: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 12:15:35.565: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 12:15:35.566: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC }] Mar 11 12:15:35.566: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:35.566: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:35.566: INFO: Mar 11 12:15:35.566: INFO: 
StatefulSet ss has not reached scale 0, at 3 Mar 11 12:15:36.570: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 12:15:36.570: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC }] Mar 11 12:15:36.570: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:36.570: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:36.570: INFO: Mar 11 12:15:36.570: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 12:15:37.575: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 12:15:37.575: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:14:49 +0000 UTC }] Mar 11 12:15:37.575: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:37.575: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 12:15:10 +0000 UTC }] Mar 11 12:15:37.575: INFO: Mar 11 12:15:37.575: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 12:15:38.579: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.933667156s Mar 11 12:15:39.583: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.929873893s Mar 11 
12:15:40.586: INFO: Verifying statefulset ss doesn't scale past 0 for another 925.964551ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-f6q6x Mar 11 12:15:41.588: INFO: Scaling statefulset ss to 0 Mar 11 12:15:41.595: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 11 12:15:41.596: INFO: Deleting all statefulset in ns e2e-tests-statefulset-f6q6x Mar 11 12:15:41.598: INFO: Scaling statefulset ss to 0 Mar 11 12:15:41.604: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 12:15:41.606: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:15:41.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-f6q6x" for this suite. Mar 11 12:15:47.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:15:47.683: INFO: namespace: e2e-tests-statefulset-f6q6x, resource: bindings, ignored listing per whitelist Mar 11 12:15:47.720: INFO: namespace e2e-tests-statefulset-f6q6x deletion completed in 6.095205399s • [SLOW TEST:57.931 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:15:47.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-l2kb STEP: Creating a pod to test atomic-volume-subpath Mar 11 12:15:47.855: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-l2kb" in namespace "e2e-tests-subpath-hmnph" to be "success or failure" Mar 11 12:15:47.859: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936926ms Mar 11 12:15:49.863: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008077077s Mar 11 12:15:51.867: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.011848003s Mar 11 12:15:53.871: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Running", Reason="", readiness=false. Elapsed: 6.01594862s Mar 11 12:15:55.875: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Running", Reason="", readiness=false. Elapsed: 8.020146963s Mar 11 12:15:57.882: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Running", Reason="", readiness=false. Elapsed: 10.02645989s Mar 11 12:15:59.885: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Running", Reason="", readiness=false. Elapsed: 12.030257925s Mar 11 12:16:01.889: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Running", Reason="", readiness=false. Elapsed: 14.034183059s Mar 11 12:16:03.893: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Running", Reason="", readiness=false. Elapsed: 16.037905852s Mar 11 12:16:05.896: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Running", Reason="", readiness=false. Elapsed: 18.041323735s Mar 11 12:16:07.900: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Running", Reason="", readiness=false. Elapsed: 20.045090034s Mar 11 12:16:09.904: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Running", Reason="", readiness=false. Elapsed: 22.049249716s Mar 11 12:16:11.908: INFO: Pod "pod-subpath-test-downwardapi-l2kb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.052979357s STEP: Saw pod success Mar 11 12:16:11.908: INFO: Pod "pod-subpath-test-downwardapi-l2kb" satisfied condition "success or failure" Mar 11 12:16:11.911: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-l2kb container test-container-subpath-downwardapi-l2kb: STEP: delete the pod Mar 11 12:16:11.946: INFO: Waiting for pod pod-subpath-test-downwardapi-l2kb to disappear Mar 11 12:16:11.957: INFO: Pod pod-subpath-test-downwardapi-l2kb no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-l2kb Mar 11 12:16:11.957: INFO: Deleting pod "pod-subpath-test-downwardapi-l2kb" in namespace "e2e-tests-subpath-hmnph" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:16:11.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-hmnph" for this suite. 
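For readers following along, the "atomic-volume-subpath" pod in this spec mounts a downwardAPI volume into the container through a subPath. The sketch below is a minimal approximation of that shape, not the framework's actual fixture; it assumes stable k8s.io/api struct fields, and names such as "podinfo" and "downward/podname" are purely illustrative.

```go
// Sketch only: a pod mounting a downwardAPI volume through a subPath.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-downwardapi-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo", // illustrative volume name
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file lands in a subdirectory so the container
							// can mount just that subdirectory via subPath.
							Path:     "downward/podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /mnt/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/mnt",
					SubPath:   "downward", // mount only the nested directory, not the whole volume
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```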
Mar 11 12:16:17.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:16:18.018: INFO: namespace: e2e-tests-subpath-hmnph, resource: bindings, ignored listing per whitelist Mar 11 12:16:18.044: INFO: namespace e2e-tests-subpath-hmnph deletion completed in 6.082040441s • [SLOW TEST:30.324 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:16:18.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 11 12:16:22.693: INFO: Successfully updated pod "labelsupdate1c6d5456-6392-11ea-bacb-0242ac11000a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:16:24.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sx8rl" for this suite. 
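The labels-update spec below passes because a projected downwardAPI volume that exposes metadata.labels is rewritten by the kubelet after the pod's labels change on the API server. A minimal sketch of such a volume stanza, assuming current k8s.io/api types and with illustrative names, is:

```go
// Sketch only: a projected volume exposing the pod's labels; the kubelet
// refreshes the mounted file when the live pod's labels are updated.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo", // illustrative
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```

Relabeling the pod (for example with something like `kubectl label pod <name> key=value --overwrite`) should then show up inside the mounted "labels" file after a short kubelet sync delay, which is the behavior the spec asserts.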
Mar 11 12:16:46.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:16:46.802: INFO: namespace: e2e-tests-projected-sx8rl, resource: bindings, ignored listing per whitelist Mar 11 12:16:46.817: INFO: namespace e2e-tests-projected-sx8rl deletion completed in 22.082732931s • [SLOW TEST:28.772 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:16:46.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-bq5f2/secret-test-2d906384-6392-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 12:16:46.911: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d90b891-6392-11ea-bacb-0242ac11000a" in namespace "e2e-tests-secrets-bq5f2" to be "success or failure" Mar 11 12:16:46.915: INFO: Pod "pod-configmaps-2d90b891-6392-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.456686ms Mar 11 12:16:48.920: INFO: Pod "pod-configmaps-2d90b891-6392-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00914426s STEP: Saw pod success Mar 11 12:16:48.920: INFO: Pod "pod-configmaps-2d90b891-6392-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 12:16:48.923: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-2d90b891-6392-11ea-bacb-0242ac11000a container env-test: STEP: delete the pod Mar 11 12:16:48.962: INFO: Waiting for pod pod-configmaps-2d90b891-6392-11ea-bacb-0242ac11000a to disappear Mar 11 12:16:49.013: INFO: Pod pod-configmaps-2d90b891-6392-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:16:49.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-bq5f2" for this suite. 
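The env-test container in this spec consumes the secret through an environment variable rather than a volume. A hedged sketch of the relevant container stanza follows; the variable name "SECRET_DATA", the secret name "secret-test-demo", and the key "data-1" are illustrative, not the fixture's real values.

```go
// Sketch only: consuming one secret key as an environment variable.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "SECRET_DATA", // illustrative variable name
			ValueFrom: &corev1.EnvVarSource{
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-demo"}, // illustrative
					Key:                  "data-1",                                              // illustrative
				},
			},
		}},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```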
Mar 11 12:16:55.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:16:55.088: INFO: namespace: e2e-tests-secrets-bq5f2, resource: bindings, ignored listing per whitelist Mar 11 12:16:55.118: INFO: namespace e2e-tests-secrets-bq5f2 deletion completed in 6.101723868s • [SLOW TEST:8.301 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:16:55.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 11 12:16:55.224: INFO: Waiting up to 5m0s for pod "pod-32846393-6392-11ea-bacb-0242ac11000a" in namespace "e2e-tests-emptydir-m56tv" to be "success or failure" Mar 11 12:16:55.238: INFO: Pod "pod-32846393-6392-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.600988ms Mar 11 12:16:57.242: INFO: Pod "pod-32846393-6392-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018589814s Mar 11 12:16:59.247: INFO: Pod "pod-32846393-6392-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022816427s STEP: Saw pod success Mar 11 12:16:59.247: INFO: Pod "pod-32846393-6392-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 12:16:59.250: INFO: Trying to get logs from node hunter-worker pod pod-32846393-6392-11ea-bacb-0242ac11000a container test-container: STEP: delete the pod Mar 11 12:16:59.276: INFO: Waiting for pod pod-32846393-6392-11ea-bacb-0242ac11000a to disappear Mar 11 12:16:59.280: INFO: Pod pod-32846393-6392-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:16:59.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-m56tv" for this suite. 
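In the test name "(root,0666,tmpfs)", "tmpfs" corresponds to an emptyDir volume with medium Memory; the "0666" refers to the file mode the test container creates inside that volume, since emptyDir itself has no mode field. A minimal sketch of the volume stanza, assuming stable k8s.io/api fields and an illustrative volume name:

```go
// Sketch only: an emptyDir backed by tmpfs (medium "Memory").
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume", // illustrative
		VolumeSource: corev1.VolumeSource{
			// Medium "Memory" makes the kubelet mount the emptyDir as tmpfs.
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```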
Mar 11 12:17:05.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:17:05.305: INFO: namespace: e2e-tests-emptydir-m56tv, resource: bindings, ignored listing per whitelist Mar 11 12:17:05.341: INFO: namespace e2e-tests-emptydir-m56tv deletion completed in 6.056816814s • [SLOW TEST:10.222 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:17:05.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-9ccv STEP: Creating a pod to test atomic-volume-subpath Mar 11 12:17:05.466: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9ccv" in namespace "e2e-tests-subpath-nsp9k" to be "success or failure" Mar 11 12:17:05.471: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.602604ms Mar 11 12:17:07.474: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007786336s Mar 11 12:17:09.477: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Running", Reason="", readiness=false. Elapsed: 4.010816517s Mar 11 12:17:11.481: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Running", Reason="", readiness=false. Elapsed: 6.015169173s Mar 11 12:17:13.486: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Running", Reason="", readiness=false. Elapsed: 8.019513375s Mar 11 12:17:15.489: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Running", Reason="", readiness=false. Elapsed: 10.023085308s Mar 11 12:17:17.493: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Running", Reason="", readiness=false. Elapsed: 12.027075492s Mar 11 12:17:19.497: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Running", Reason="", readiness=false. Elapsed: 14.030899591s Mar 11 12:17:21.501: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Running", Reason="", readiness=false. Elapsed: 16.034989378s Mar 11 12:17:23.505: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Running", Reason="", readiness=false. Elapsed: 18.039089101s Mar 11 12:17:25.508: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Running", Reason="", readiness=false. Elapsed: 20.042064196s Mar 11 12:17:27.512: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Running", Reason="", readiness=false. Elapsed: 22.046094414s Mar 11 12:17:29.516: INFO: Pod "pod-subpath-test-projected-9ccv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.049938117s STEP: Saw pod success Mar 11 12:17:29.516: INFO: Pod "pod-subpath-test-projected-9ccv" satisfied condition "success or failure" Mar 11 12:17:29.519: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-9ccv container test-container-subpath-projected-9ccv: STEP: delete the pod Mar 11 12:17:29.558: INFO: Waiting for pod pod-subpath-test-projected-9ccv to disappear Mar 11 12:17:29.577: INFO: Pod pod-subpath-test-projected-9ccv no longer exists STEP: Deleting pod pod-subpath-test-projected-9ccv Mar 11 12:17:29.577: INFO: Deleting pod "pod-subpath-test-projected-9ccv" in namespace "e2e-tests-subpath-nsp9k" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:17:29.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-nsp9k" for this suite. Mar 11 12:17:35.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:17:35.648: INFO: namespace: e2e-tests-subpath-nsp9k, resource: bindings, ignored listing per whitelist Mar 11 12:17:35.667: INFO: namespace e2e-tests-subpath-nsp9k deletion completed in 6.081997177s • [SLOW TEST:30.326 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:17:35.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-xhlfz [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Mar 11 12:17:35.807: INFO: Found 0 stateful pods, waiting for 3 Mar 11 12:17:45.812: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 11 12:17:45.812: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 11 12:17:45.812: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 11 12:17:45.840: INFO: 
Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 11 12:17:55.886: INFO: Updating stateful set ss2 Mar 11 12:17:55.898: INFO: Waiting for Pod e2e-tests-statefulset-xhlfz/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 12:18:05.906: INFO: Waiting for Pod e2e-tests-statefulset-xhlfz/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 11 12:18:16.080: INFO: Found 2 stateful pods, waiting for 3 Mar 11 12:18:26.084: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 11 12:18:26.084: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 11 12:18:26.084: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 11 12:18:26.102: INFO: Updating stateful set ss2 Mar 11 12:18:26.133: INFO: Waiting for Pod e2e-tests-statefulset-xhlfz/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 12:18:36.140: INFO: Waiting for Pod e2e-tests-statefulset-xhlfz/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 12:18:46.153: INFO: Updating stateful set ss2 Mar 11 12:18:46.195: INFO: Waiting for StatefulSet e2e-tests-statefulset-xhlfz/ss2 to complete update Mar 11 12:18:46.195: INFO: Waiting for Pod e2e-tests-statefulset-xhlfz/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 11 12:18:56.203: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xhlfz Mar 11 12:18:56.206: INFO: Scaling statefulset ss2 to 0 Mar 11 12:19:06.223: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 12:19:06.226: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:19:06.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-xhlfz" for this suite. 
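The canary and phased rollout above are driven by the RollingUpdate strategy's partition field: only pods with ordinals at or above the partition receive the new template, and lowering the partition afterwards phases the update across the remaining ordinals. The sketch below shows the relevant spec fields under those semantics; it is not the test's actual fixture, the labels are illustrative, and the partition value of 2 simply mirrors the single-canary pattern seen in the log.

```go
// Sketch only: spec fields that drive canary / phased rolling updates.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(3),
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: map[string]string{"app": "ss2"}}, // illustrative label
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					// Only ordinals >= 2 (here: ss2-2) get the new template;
					// lowering the partition later phases the rollout to the rest.
					Partition: int32Ptr(2),
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "ss2"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.15-alpine"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
```

Setting the partition back to 0 would let the rolling update proceed through every ordinal, which matches the "phased rolling update" half of the spec.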
Mar 11 12:19:12.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:19:12.298: INFO: namespace: e2e-tests-statefulset-xhlfz, resource: bindings, ignored listing per whitelist Mar 11 12:19:12.327: INFO: namespace e2e-tests-statefulset-xhlfz deletion completed in 6.081582762s • [SLOW TEST:96.660 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:19:12.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 11 12:19:17.452: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:19:18.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-6rgbj" for this suite. 
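Adoption and release in the ReplicaSet spec above hinge on the label selector: a bare pod whose labels match the selector is adopted by the controller via an ownerReference, and changing that label so it no longer matches releases the pod again (the controller then creates a replacement). A minimal sketch of the two objects involved, with illustrative names and assuming stable k8s.io/api types:

```go
// Sketch only: a bare pod plus a ReplicaSet whose selector matches it.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption-release"}

	// Created first, with no owner; the controller adopts it once the
	// ReplicaSet below exists and its selector matches these labels.
	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}}, // illustrative image
		},
	}

	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
				},
			},
		},
	}

	for _, obj := range []interface{}{orphan, rs} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```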
Mar 11 12:19:40.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:19:40.565: INFO: namespace: e2e-tests-replicaset-6rgbj, resource: bindings, ignored listing per whitelist Mar 11 12:19:40.568: INFO: namespace e2e-tests-replicaset-6rgbj deletion completed in 22.089308047s • [SLOW TEST:28.240 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 11 12:19:40.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-952598f9-6392-11ea-bacb-0242ac11000a STEP: Creating a pod to test consume secrets Mar 11 12:19:40.718: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-95291e2b-6392-11ea-bacb-0242ac11000a" in namespace "e2e-tests-projected-rplvr" to be "success or failure" Mar 11 12:19:40.771: INFO: Pod "pod-projected-secrets-95291e2b-6392-11ea-bacb-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 53.044157ms Mar 11 12:19:42.773: INFO: Pod "pod-projected-secrets-95291e2b-6392-11ea-bacb-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.055561347s STEP: Saw pod success Mar 11 12:19:42.773: INFO: Pod "pod-projected-secrets-95291e2b-6392-11ea-bacb-0242ac11000a" satisfied condition "success or failure" Mar 11 12:19:42.775: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-95291e2b-6392-11ea-bacb-0242ac11000a container projected-secret-volume-test: STEP: delete the pod Mar 11 12:19:42.790: INFO: Waiting for pod pod-projected-secrets-95291e2b-6392-11ea-bacb-0242ac11000a to disappear Mar 11 12:19:42.795: INFO: Pod pod-projected-secrets-95291e2b-6392-11ea-bacb-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 11 12:19:42.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rplvr" for this suite. 
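The "mappings" in this spec are items entries that remap a secret key to a different file path inside the projected volume mount. A hedged sketch of the volume stanza follows; the secret name, key, path, and mode are illustrative rather than the fixture's real values.

```go
// Sketch only: a projected secret volume remapping one key to a new path.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func modePtr(m int32) *int32 { return &m }

func main() {
	vol := corev1.Volume{
		Name: "projected-secret-volume", // illustrative
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: modePtr(0644),
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-demo"}, // illustrative
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // key inside the Secret
							Path: "new-path-data-1", // file name under the mount point
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```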
Mar 11 12:19:48.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 12:19:48.860: INFO: namespace: e2e-tests-projected-rplvr, resource: bindings, ignored listing per whitelist Mar 11 12:19:48.889: INFO: namespace e2e-tests-projected-rplvr deletion completed in 6.091242623s • [SLOW TEST:8.321 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS
Mar 11 12:19:48.889: INFO: Running AfterSuite actions on all nodes
Mar 11 12:19:48.889: INFO: Running AfterSuite actions on node 1
Mar 11 12:19:48.889: INFO: Skipping dumping logs from cluster
Ran 200 of 2164 Specs in 5599.295 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS