I0315 22:43:37.378443       6 e2e.go:224] Starting e2e run "6881bcb9-670e-11ea-811c-0242ac110013" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584312216 - Will randomize all specs
Will run 201 of 2164 specs

Mar 15 22:43:37.548: INFO: >>> kubeConfig: /root/.kube/config
Mar 15 22:43:37.550: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 15 22:43:37.569: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 15 22:43:37.604: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 15 22:43:37.604: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 15 22:43:37.604: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 15 22:43:37.614: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 15 22:43:37.614: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 15 22:43:37.614: INFO: e2e test version: v1.13.12
Mar 15 22:43:37.615: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:43:37.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Mar 15 22:43:37.748: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-69074761-670e-11ea-811c-0242ac110013
STEP: Creating a pod to test consume secrets
Mar 15 22:43:37.762: INFO: Waiting up to 5m0s for pod "pod-secrets-6907de4b-670e-11ea-811c-0242ac110013" in namespace "e2e-tests-secrets-n9pxw" to be "success or failure"
Mar 15 22:43:37.766: INFO: Pod "pod-secrets-6907de4b-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111948ms
Mar 15 22:43:39.769: INFO: Pod "pod-secrets-6907de4b-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007627409s
Mar 15 22:43:41.773: INFO: Pod "pod-secrets-6907de4b-670e-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.010725299s
Mar 15 22:43:43.886: INFO: Pod "pod-secrets-6907de4b-670e-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.12420701s
STEP: Saw pod success
Mar 15 22:43:43.886: INFO: Pod "pod-secrets-6907de4b-670e-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 22:43:43.888: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-6907de4b-670e-11ea-811c-0242ac110013 container secret-volume-test:
STEP: delete the pod
Mar 15 22:43:44.204: INFO: Waiting for pod pod-secrets-6907de4b-670e-11ea-811c-0242ac110013 to disappear
Mar 15 22:43:44.215: INFO: Pod pod-secrets-6907de4b-670e-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:43:44.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-n9pxw" for this suite.
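Editor's note: the pod in this spec is built by the Go e2e framework, but it corresponds roughly to the manifest below. This is a hedged sketch only: the object names, image, command, mount path, and the specific runAsUser/fsGroup/defaultMode values are illustrative assumptions, not values taken from the log.

```yaml
# Hedged sketch -- all names and numeric values are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # run as non-root
    fsGroup: 2000        # group ownership applied to volume files
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
      defaultMode: 0400  # octal in YAML; JSON clients must pass decimal 256
```

The test then reads the container's output to confirm the mounted file carries the expected mode and group, which is why a "success or failure" pod that merely exits 0 is enough to pass.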
Mar 15 22:43:50.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:43:50.503: INFO: namespace: e2e-tests-secrets-n9pxw, resource: bindings, ignored listing per whitelist
Mar 15 22:43:50.529: INFO: namespace e2e-tests-secrets-n9pxw deletion completed in 6.310729825s

• [SLOW TEST:12.914 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:43:50.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Mar 15 22:43:50.623: INFO: Waiting up to 5m0s for pod "var-expansion-70b1c457-670e-11ea-811c-0242ac110013" in namespace "e2e-tests-var-expansion-7jqlz" to be "success or failure"
Mar 15 22:43:50.640: INFO: Pod "var-expansion-70b1c457-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 16.964681ms
Mar 15 22:43:52.643: INFO: Pod "var-expansion-70b1c457-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019597711s
Mar 15 22:43:55.212: INFO: Pod "var-expansion-70b1c457-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.588913134s
Mar 15 22:43:57.220: INFO: Pod "var-expansion-70b1c457-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596940696s
Mar 15 22:43:59.335: INFO: Pod "var-expansion-70b1c457-670e-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.711386436s
STEP: Saw pod success
Mar 15 22:43:59.335: INFO: Pod "var-expansion-70b1c457-670e-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 22:43:59.342: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-70b1c457-670e-11ea-811c-0242ac110013 container dapi-container:
STEP: delete the pod
Mar 15 22:44:01.268: INFO: Waiting for pod var-expansion-70b1c457-670e-11ea-811c-0242ac110013 to disappear
Mar 15 22:44:01.503: INFO: Pod var-expansion-70b1c457-670e-11ea-811c-0242ac110013 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:44:01.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-7jqlz" for this suite.
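Editor's note: the "substitution in container's command" being tested is Kubernetes's own `$(VAR)` expansion, which the kubelet performs before the container starts (it is not shell expansion). A hedged sketch of an equivalent pod; the names and the value are illustrative assumptions:

```yaml
# Hedged sketch -- names and values are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test-value"
    # $(MESSAGE) is replaced with the env var's value by Kubernetes itself;
    # no shell is involved, so /bin/echo receives the literal expanded string.
    command: ["/bin/echo", "$(MESSAGE)"]
```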
Mar 15 22:44:09.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:44:09.669: INFO: namespace: e2e-tests-var-expansion-7jqlz, resource: bindings, ignored listing per whitelist
Mar 15 22:44:09.680: INFO: namespace e2e-tests-var-expansion-7jqlz deletion completed in 8.173125012s

• [SLOW TEST:19.151 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:44:09.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-7c1fedce-670e-11ea-811c-0242ac110013
STEP: Creating a pod to test consume configMaps
Mar 15 22:44:09.812: INFO: Waiting up to 5m0s for pod "pod-configmaps-7c2222ac-670e-11ea-811c-0242ac110013" in namespace "e2e-tests-configmap-xrtb4" to be "success or failure"
Mar 15 22:44:09.815: INFO: Pod "pod-configmaps-7c2222ac-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958123ms
Mar 15 22:44:11.862: INFO: Pod "pod-configmaps-7c2222ac-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050069125s
Mar 15 22:44:13.865: INFO: Pod "pod-configmaps-7c2222ac-670e-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053125853s
STEP: Saw pod success
Mar 15 22:44:13.865: INFO: Pod "pod-configmaps-7c2222ac-670e-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 22:44:13.867: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-7c2222ac-670e-11ea-811c-0242ac110013 container configmap-volume-test:
STEP: delete the pod
Mar 15 22:44:13.900: INFO: Waiting for pod pod-configmaps-7c2222ac-670e-11ea-811c-0242ac110013 to disappear
Mar 15 22:44:13.981: INFO: Pod pod-configmaps-7c2222ac-670e-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:44:13.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xrtb4" for this suite.
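Editor's note: a hedged sketch of the ConfigMap-as-volume pattern this spec exercises. The data keys, names, image, and mount path are illustrative assumptions; each key in `data` surfaces as a file under the mount path.

```yaml
# Hedged sketch -- names and keys are illustrative assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # each ConfigMap key becomes a file named after the key
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example
```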
Mar 15 22:44:19.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:44:20.046: INFO: namespace: e2e-tests-configmap-xrtb4, resource: bindings, ignored listing per whitelist
Mar 15 22:44:20.060: INFO: namespace e2e-tests-configmap-xrtb4 deletion completed in 6.075799175s

• [SLOW TEST:10.380 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:44:20.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Mar 15 22:44:20.145: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-a,UID:824be174-670e-11ea-99e8-0242ac110002,ResourceVersion:41085,Generation:0,CreationTimestamp:2020-03-15 22:44:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 15 22:44:20.145: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-a,UID:824be174-670e-11ea-99e8-0242ac110002,ResourceVersion:41085,Generation:0,CreationTimestamp:2020-03-15 22:44:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Mar 15 22:44:30.151: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-a,UID:824be174-670e-11ea-99e8-0242ac110002,ResourceVersion:41104,Generation:0,CreationTimestamp:2020-03-15 22:44:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar 15 22:44:30.151: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-a,UID:824be174-670e-11ea-99e8-0242ac110002,ResourceVersion:41104,Generation:0,CreationTimestamp:2020-03-15 22:44:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Mar 15 22:44:40.159: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-a,UID:824be174-670e-11ea-99e8-0242ac110002,ResourceVersion:41124,Generation:0,CreationTimestamp:2020-03-15 22:44:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 15 22:44:40.159: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-a,UID:824be174-670e-11ea-99e8-0242ac110002,ResourceVersion:41124,Generation:0,CreationTimestamp:2020-03-15 22:44:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Mar 15 22:44:50.181: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-a,UID:824be174-670e-11ea-99e8-0242ac110002,ResourceVersion:41144,Generation:0,CreationTimestamp:2020-03-15 22:44:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 15 22:44:50.181: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-a,UID:824be174-670e-11ea-99e8-0242ac110002,ResourceVersion:41144,Generation:0,CreationTimestamp:2020-03-15 22:44:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Mar 15 22:45:00.186: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-b,UID:9a296aa1-670e-11ea-99e8-0242ac110002,ResourceVersion:41162,Generation:0,CreationTimestamp:2020-03-15 22:45:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 15 22:45:00.186: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-b,UID:9a296aa1-670e-11ea-99e8-0242ac110002,ResourceVersion:41162,Generation:0,CreationTimestamp:2020-03-15 22:45:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Mar 15 22:45:10.191: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-b,UID:9a296aa1-670e-11ea-99e8-0242ac110002,ResourceVersion:41181,Generation:0,CreationTimestamp:2020-03-15 22:45:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 15 22:45:10.191: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nrbx9,SelfLink:/api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps/e2e-watch-test-configmap-b,UID:9a296aa1-670e-11ea-99e8-0242ac110002,ResourceVersion:41181,Generation:0,CreationTimestamp:2020-03-15 22:45:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:45:20.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-nrbx9" for this suite.
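Editor's note: the three watchers in this spec correspond to plain API watch requests distinguished only by their label selectors. A hedged sketch of the underlying requests (selector values come from the log; the URL-encoding shown is the standard form, assumed rather than taken from the log):

```http
# Watcher for label A
GET /api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps?watch=1&labelSelector=watch-this-configmap%3Dmultiple-watchers-A

# Watcher for label B
GET /api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps?watch=1&labelSelector=watch-this-configmap%3Dmultiple-watchers-B

# Watcher for A or B, via a set-based selector:
#   watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)
GET /api/v1/namespaces/e2e-tests-watch-nrbx9/configmaps?watch=1&labelSelector=watch-this-configmap+in+%28multiple-watchers-A%2Cmultiple-watchers-B%29
```

This is why every event on configmap A appears twice in the log: once from the A-only watcher and once from the A-or-B watcher, while the B-only watcher stays silent.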
Mar 15 22:45:26.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:45:26.655: INFO: namespace: e2e-tests-watch-nrbx9, resource: bindings, ignored listing per whitelist
Mar 15 22:45:26.716: INFO: namespace e2e-tests-watch-nrbx9 deletion completed in 6.52188442s

• [SLOW TEST:66.656 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:45:26.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Mar 15 22:45:26.855: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9rf5c,SelfLink:/api/v1/namespaces/e2e-tests-watch-9rf5c/configmaps/e2e-watch-test-resource-version,UID:aa0588ff-670e-11ea-99e8-0242ac110002,ResourceVersion:41224,Generation:0,CreationTimestamp:2020-03-15 22:45:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 15 22:45:26.855: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9rf5c,SelfLink:/api/v1/namespaces/e2e-tests-watch-9rf5c/configmaps/e2e-watch-test-resource-version,UID:aa0588ff-670e-11ea-99e8-0242ac110002,ResourceVersion:41225,Generation:0,CreationTimestamp:2020-03-15 22:45:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:45:26.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-9rf5c" for this suite. 
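Editor's note: starting a watch "from a specific resource version" means passing `resourceVersion` on the watch request, so the server replays only events newer than that version. A hedged sketch of the request shape (the placeholder stands in for the version returned by the first update, which the log does not print):

```http
GET /api/v1/namespaces/e2e-tests-watch-9rf5c/configmaps?watch=1&resourceVersion=<rv-from-first-update>
```

That is why the watcher, created only after all four mutations had already happened, still receives exactly the MODIFIED (second update) and DELETED events that follow the chosen version.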
Mar 15 22:45:32.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:45:32.881: INFO: namespace: e2e-tests-watch-9rf5c, resource: bindings, ignored listing per whitelist
Mar 15 22:45:32.927: INFO: namespace e2e-tests-watch-9rf5c deletion completed in 6.067338349s

• [SLOW TEST:6.210 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:45:32.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 15 22:45:33.072: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Mar 15 22:45:33.098: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:33.100: INFO: Number of nodes with available pods: 0
Mar 15 22:45:33.100: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 22:45:34.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:35.015: INFO: Number of nodes with available pods: 0
Mar 15 22:45:35.015: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 22:45:35.475: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:35.566: INFO: Number of nodes with available pods: 0
Mar 15 22:45:35.566: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 22:45:36.105: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:36.110: INFO: Number of nodes with available pods: 0
Mar 15 22:45:36.110: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 22:45:37.392: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:37.395: INFO: Number of nodes with available pods: 0
Mar 15 22:45:37.395: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 22:45:38.104: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:38.107: INFO: Number of nodes with available pods: 0
Mar 15 22:45:38.107: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 22:45:39.104: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:39.107: INFO: Number of nodes with available pods: 0
Mar 15 22:45:39.107: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 22:45:40.103: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:40.106: INFO: Number of nodes with available pods: 0
Mar 15 22:45:40.106: INFO: Node hunter-worker is running more than one daemon pod
Mar 15 22:45:41.256: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:41.261: INFO: Number of nodes with available pods: 2
Mar 15 22:45:41.261: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Mar 15 22:45:42.577: INFO: Wrong image for pod: daemon-set-ct9lk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 15 22:45:42.577: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 15 22:45:42.615: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:43.620: INFO: Wrong image for pod: daemon-set-ct9lk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 15 22:45:43.620: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 15 22:45:43.623: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:44.672: INFO: Wrong image for pod: daemon-set-ct9lk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 15 22:45:44.672: INFO: Pod daemon-set-ct9lk is not available
Mar 15 22:45:44.672: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 15 22:45:44.675: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:45.619: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 15 22:45:45.619: INFO: Pod daemon-set-w6sbz is not available
Mar 15 22:45:45.622: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:46.639: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 15 22:45:46.639: INFO: Pod daemon-set-w6sbz is not available
Mar 15 22:45:46.781: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:45:47.619: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 15 22:45:47.619: INFO: Pod daemon-set-w6sbz is not available Mar 15 22:45:47.622: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:48.645: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 22:45:48.649: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:49.618: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 22:45:49.621: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:50.620: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 22:45:50.620: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:45:50.623: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:51.619: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 22:45:51.619: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:45:51.623: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:52.619: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 15 22:45:52.619: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:45:52.623: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:53.619: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 22:45:53.619: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:45:53.622: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:54.619: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 22:45:54.619: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:45:54.622: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:55.619: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 22:45:55.619: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:45:55.624: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:56.619: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 15 22:45:56.619: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:45:56.623: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:57.864: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 22:45:57.864: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:45:57.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:58.618: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 22:45:58.619: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:45:58.621: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:45:59.619: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 22:45:59.619: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:45:59.623: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:46:00.619: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 15 22:46:00.619: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:46:00.622: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:46:01.619: INFO: Wrong image for pod: daemon-set-nmpdj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 15 22:46:01.619: INFO: Pod daemon-set-nmpdj is not available Mar 15 22:46:01.622: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 15 22:46:02.620: INFO: Pod daemon-set-wmg4c is not available Mar 15 22:46:02.623: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 15 22:46:02.626: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:46:02.629: INFO: Number of nodes with available pods: 1
Mar 15 22:46:02.629: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 15 22:46:03.634: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:46:03.638: INFO: Number of nodes with available pods: 1
Mar 15 22:46:03.638: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 15 22:46:04.634: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:46:04.638: INFO: Number of nodes with available pods: 1
Mar 15 22:46:04.638: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 15 22:46:05.633: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:46:05.635: INFO: Number of nodes with available pods: 1
Mar 15 22:46:05.635: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 15 22:46:06.634: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 15 22:46:06.637: INFO: Number of nodes with available pods: 2
Mar 15 22:46:06.637: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-9j86m, will wait for the garbage collector to delete the pods
Mar 15 22:46:06.710: INFO: Deleting DaemonSet.extensions daemon-set took: 6.066165ms
Mar 15 22:46:06.811: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.289078ms
Mar 15 22:46:21.914: INFO: Number of nodes with available pods: 0
Mar 15 22:46:21.914: INFO: Number of running nodes: 0, number of available pods: 0
Mar 15 22:46:21.916: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9j86m/daemonsets","resourceVersion":"41410"},"items":null}
Mar 15 22:46:21.919: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9j86m/pods","resourceVersion":"41410"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:46:21.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-9j86m" for this suite.
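For context, the RollingUpdate behavior this test exercises corresponds to a DaemonSet spec along the following lines. This is a hand-written sketch, not the test's actual manifest; the object name and images mirror the ones in the log, while the label key is illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # name matches the log; namespace omitted
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # illustrative label key
  updateStrategy:
    type: RollingUpdate       # the strategy under test
    rollingUpdate:
      maxUnavailable: 1       # one pod replaced at a time, matching the one-by-one churn in the log
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        # Updating this field (here from nginx:1.14-alpine to the redis test image)
        # is what triggers the "Wrong image for pod" polling seen above.
        image: docker.io/library/nginx:1.14-alpine
```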
Mar 15 22:46:30.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:46:30.094: INFO: namespace: e2e-tests-daemonsets-9j86m, resource: bindings, ignored listing per whitelist
Mar 15 22:46:30.128: INFO: namespace e2e-tests-daemonsets-9j86m deletion completed in 8.198023326s
• [SLOW TEST:57.202 seconds]
[sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:46:30.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 15 22:46:30.324: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:46:31.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-jkqf6" for this suite.
Mar 15 22:46:37.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:46:37.932: INFO: namespace: e2e-tests-custom-resource-definition-jkqf6, resource: bindings, ignored listing per whitelist
Mar 15 22:46:37.986: INFO: namespace e2e-tests-custom-resource-definition-jkqf6 deletion completed in 6.166091024s
• [SLOW TEST:7.857 seconds]
[sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:46:37.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 15 22:46:38.148: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:46:45.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-n4scb" for this suite.
Mar 15 22:46:51.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:46:51.449: INFO: namespace: e2e-tests-init-container-n4scb, resource: bindings, ignored listing per whitelist
Mar 15 22:46:51.455: INFO: namespace e2e-tests-init-container-n4scb deletion completed in 6.092191179s
• [SLOW TEST:13.469 seconds]
[k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:46:51.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Mar 15 22:46:51.606: INFO: Waiting up to 5m0s for pod "client-containers-dc8fa2d8-670e-11ea-811c-0242ac110013" in namespace "e2e-tests-containers-p6ws5" to be "success or failure"
Mar 15 22:46:51.628: INFO: Pod "client-containers-dc8fa2d8-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 21.107076ms
Mar 15 22:46:53.649: INFO: Pod "client-containers-dc8fa2d8-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042485727s
Mar 15 22:46:55.652: INFO: Pod "client-containers-dc8fa2d8-670e-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045631891s
STEP: Saw pod success
Mar 15 22:46:55.652: INFO: Pod "client-containers-dc8fa2d8-670e-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 22:46:55.655: INFO: Trying to get logs from node hunter-worker2 pod client-containers-dc8fa2d8-670e-11ea-811c-0242ac110013 container test-container:
STEP: delete the pod
Mar 15 22:46:55.934: INFO: Waiting for pod client-containers-dc8fa2d8-670e-11ea-811c-0242ac110013 to disappear
Mar 15 22:46:55.945: INFO: Pod client-containers-dc8fa2d8-670e-11ea-811c-0242ac110013 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:46:55.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-p6ws5" for this suite.
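The "override the image's default arguments (docker cmd)" case comes down to setting `args` in the pod spec, which replaces the image's Docker `CMD` while leaving its `ENTRYPOINT` in effect. A minimal sketch; field values here are illustrative, not the test's actual manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # illustrative image
    # args replaces the image's CMD; command (not set here) would replace ENTRYPOINT
    args: ["echo", "overridden arguments"]
```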
Mar 15 22:47:01.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:47:02.019: INFO: namespace: e2e-tests-containers-p6ws5, resource: bindings, ignored listing per whitelist
Mar 15 22:47:02.025: INFO: namespace e2e-tests-containers-p6ws5 deletion completed in 6.076896402s
• [SLOW TEST:10.569 seconds]
[k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:47:02.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:47:02.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-s97mf" for this suite.
Mar 15 22:47:08.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:47:08.436: INFO: namespace: e2e-tests-kubelet-test-s97mf, resource: bindings, ignored listing per whitelist
Mar 15 22:47:08.445: INFO: namespace e2e-tests-kubelet-test-s97mf deletion completed in 6.086950746s
• [SLOW TEST:6.420 seconds]
[k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:47:08.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 15 22:47:08.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-v554k'
Mar 15 22:47:12.743: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 15 22:47:12.743: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Mar 15 22:47:14.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-v554k'
Mar 15 22:47:15.061: INFO: stderr: ""
Mar 15 22:47:15.061: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:47:15.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v554k" for this suite.
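The stderr in this test flags `kubectl run --generator=deployment/apps.v1` as deprecated. Outside the test framework, the same Deployment is better created declaratively; the manifest below is a rough sketch of what the deprecated command produces, not the tool's exact output (the `run:` label key follows the old generator's convention, and `replicas: 1` is its default):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```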
Mar 15 22:47:21.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:47:21.320: INFO: namespace: e2e-tests-kubectl-v554k, resource: bindings, ignored listing per whitelist
Mar 15 22:47:21.380: INFO: namespace e2e-tests-kubectl-v554k deletion completed in 6.237391946s
• [SLOW TEST:12.935 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:47:21.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Mar 15 22:47:21.472: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 15 22:47:21.487: INFO: Waiting for terminating namespaces to be deleted...
Mar 15 22:47:21.490: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Mar 15 22:47:21.495: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
Mar 15 22:47:21.495: INFO: Container kube-proxy ready: true, restart count 0
Mar 15 22:47:21.495: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 15 22:47:21.495: INFO: Container kindnet-cni ready: true, restart count 0
Mar 15 22:47:21.495: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Mar 15 22:47:21.495: INFO: Container coredns ready: true, restart count 0
Mar 15 22:47:21.500: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Mar 15 22:47:21.500: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 15 22:47:21.500: INFO: Container kindnet-cni ready: true, restart count 0
Mar 15 22:47:21.500: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
Mar 15 22:47:21.500: INFO: Container coredns ready: true, restart count 0
Mar 15 22:47:21.500: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
Mar 15 22:47:21.500: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f0cc3fdb-670e-11ea-811c-0242ac110013 42
STEP: Trying to relaunch the pod, now with labels.
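The relaunch step pins the pod to the freshly labeled node through `nodeSelector`. Schematically (the label key and value `42` are taken from the log; the pod name and image are illustrative, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels   # illustrative
spec:
  # Only a node carrying this exact label/value pair is eligible,
  # which is what "NodeSelector is respected if matching" verifies.
  nodeSelector:
    kubernetes.io/e2e-f0cc3fdb-670e-11ea-811c-0242ac110013: "42"
  containers:
  - name: with-labels
    image: docker.io/library/nginx:1.14-alpine
```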
STEP: removing the label kubernetes.io/e2e-f0cc3fdb-670e-11ea-811c-0242ac110013 off the node hunter-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f0cc3fdb-670e-11ea-811c-0242ac110013
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:47:32.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-txn6c" for this suite.
Mar 15 22:47:46.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:47:46.196: INFO: namespace: e2e-tests-sched-pred-txn6c, resource: bindings, ignored listing per whitelist
Mar 15 22:47:46.235: INFO: namespace e2e-tests-sched-pred-txn6c deletion completed in 14.103138483s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:24.855 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:47:46.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-fd37b7fe-670e-11ea-811c-0242ac110013
STEP: Creating a pod to test consume secrets
Mar 15 22:47:46.390: INFO: Waiting up to 5m0s for pod "pod-secrets-fd397124-670e-11ea-811c-0242ac110013" in namespace "e2e-tests-secrets-8p5hr" to be "success or failure"
Mar 15 22:47:46.405: INFO: Pod "pod-secrets-fd397124-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 15.266397ms
Mar 15 22:47:48.409: INFO: Pod "pod-secrets-fd397124-670e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018847274s
Mar 15 22:47:50.413: INFO: Pod "pod-secrets-fd397124-670e-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022562674s
STEP: Saw pod success
Mar 15 22:47:50.413: INFO: Pod "pod-secrets-fd397124-670e-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 22:47:50.416: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-fd397124-670e-11ea-811c-0242ac110013 container secret-volume-test:
STEP: delete the pod
Mar 15 22:47:50.434: INFO: Waiting for pod pod-secrets-fd397124-670e-11ea-811c-0242ac110013 to disappear
Mar 15 22:47:50.450: INFO: Pod pod-secrets-fd397124-670e-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:47:50.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8p5hr" for this suite.
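The "mappings and Item Mode" variant mounts a Secret through its `items` list, remapping a key to a new file path and giving that file a per-item `mode`. A sketch of the shape involved; the secret name, key, path, image, and mode are illustrative, not the test's actual values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example   # illustrative
      items:
      - key: data-1             # key in the Secret
        path: new-path-data-1   # remapped file name (the "mapping")
        mode: 0400              # per-item file mode (the "Item Mode")
```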
Mar 15 22:47:56.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:47:56.521: INFO: namespace: e2e-tests-secrets-8p5hr, resource: bindings, ignored listing per whitelist Mar 15 22:47:56.554: INFO: namespace e2e-tests-secrets-8p5hr deletion completed in 6.100924203s • [SLOW TEST:10.318 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:47:56.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 15 22:47:56.842: INFO: Waiting up to 5m0s for pod "downward-api-036314c6-670f-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-npbxg" to be "success or failure" Mar 15 22:47:56.844: INFO: Pod "downward-api-036314c6-670f-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.678645ms Mar 15 22:47:58.848: INFO: Pod "downward-api-036314c6-670f-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006693726s Mar 15 22:48:00.853: INFO: Pod "downward-api-036314c6-670f-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011221112s STEP: Saw pod success Mar 15 22:48:00.853: INFO: Pod "downward-api-036314c6-670f-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 22:48:00.856: INFO: Trying to get logs from node hunter-worker2 pod downward-api-036314c6-670f-11ea-811c-0242ac110013 container dapi-container: STEP: delete the pod Mar 15 22:48:00.957: INFO: Waiting for pod downward-api-036314c6-670f-11ea-811c-0242ac110013 to disappear Mar 15 22:48:00.977: INFO: Pod downward-api-036314c6-670f-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:48:00.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-npbxg" for this suite. Mar 15 22:48:07.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:48:07.089: INFO: namespace: e2e-tests-downward-api-npbxg, resource: bindings, ignored listing per whitelist Mar 15 22:48:07.106: INFO: namespace e2e-tests-downward-api-npbxg deletion completed in 6.125848011s • [SLOW TEST:10.552 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:48:07.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:48:07.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-9m6hn" for this suite.
Mar 15 22:48:13.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:48:13.242: INFO: namespace: e2e-tests-services-9m6hn, resource: bindings, ignored listing per whitelist
Mar 15 22:48:13.304: INFO: namespace e2e-tests-services-9m6hn deletion completed in 6.089450206s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:6.198 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:48:13.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 15 22:48:13.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5cxc5'
Mar 15 22:48:13.496: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 15 22:48:13.496: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Mar 15 22:48:13.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-5cxc5'
Mar 15 22:48:13.680: INFO: stderr: ""
Mar 15 22:48:13.680: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:48:13.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5cxc5" for this suite.
Mar 15 22:48:19.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:48:19.761: INFO: namespace: e2e-tests-kubectl-5cxc5, resource: bindings, ignored listing per whitelist
Mar 15 22:48:19.781: INFO: namespace e2e-tests-kubectl-5cxc5 deletion completed in 6.0986342s
• [SLOW TEST:6.477 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:48:19.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-1152ec21-670f-11ea-811c-0242ac110013
STEP: Creating a pod to test consume configMaps
Mar 15 22:48:20.321: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-117329fb-670f-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-p8vq9" to be "success or failure"
Mar 15 22:48:20.384: INFO: Pod "pod-projected-configmaps-117329fb-670f-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 62.149581ms
Mar 15 22:48:22.628: INFO: Pod "pod-projected-configmaps-117329fb-670f-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307015782s
Mar 15 22:48:24.632: INFO: Pod "pod-projected-configmaps-117329fb-670f-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310612122s
Mar 15 22:48:26.674: INFO: Pod "pod-projected-configmaps-117329fb-670f-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.353100579s
STEP: Saw pod success
Mar 15 22:48:26.675: INFO: Pod "pod-projected-configmaps-117329fb-670f-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 22:48:26.678: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-117329fb-670f-11ea-811c-0242ac110013 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 15 22:48:26.774: INFO: Waiting for pod pod-projected-configmaps-117329fb-670f-11ea-811c-0242ac110013 to disappear
Mar 15 22:48:26.830: INFO: Pod pod-projected-configmaps-117329fb-670f-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:48:26.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p8vq9" for this suite.
Mar 15 22:48:32.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:48:32.915: INFO: namespace: e2e-tests-projected-p8vq9, resource: bindings, ignored listing per whitelist
Mar 15 22:48:32.920: INFO: namespace e2e-tests-projected-p8vq9 deletion completed in 6.08644671s
• [SLOW TEST:13.139 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:48:32.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-bl7ws in namespace e2e-tests-proxy-hqlpb
I0315 22:48:33.282028 6 runners.go:184] Created replication controller with name: proxy-service-bl7ws, namespace: e2e-tests-proxy-hqlpb, replica count: 1
I0315 22:48:34.332515 6 runners.go:184] proxy-service-bl7ws Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0315 22:48:35.332773 6 runners.go:184] proxy-service-bl7ws Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0315 22:48:36.332995 6 runners.go:184] proxy-service-bl7ws Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0315 22:48:37.333332 6 runners.go:184] proxy-service-bl7ws Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0315 22:48:38.333562 6 runners.go:184] proxy-service-bl7ws Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0315 22:48:39.333759 6 runners.go:184] proxy-service-bl7ws Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 15 22:48:39.336: INFO: setup took 6.218553416s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Mar 15 22:48:39.343: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-hqlpb/pods/http:proxy-service-bl7ws-x4zxm:162/proxy/: bar (200; 7.19042ms)
Mar 15 22:48:39.343: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-hqlpb/pods/http:proxy-service-bl7ws-x4zxm:160/proxy/: foo (200; 7.221739ms)
Mar 15 22:48:39.343: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-hqlpb/pods/proxy-service-bl7ws-x4zxm:160/proxy/: foo (200; 7.176535ms)
Mar 15 22:48:39.344: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-hqlpb/pods/proxy-service-bl7ws-x4zxm/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 15 22:48:57.537: INFO: Waiting up to 5m0s for pod "pod-27a2202b-670f-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-dmggw" to be "success or failure"
Mar 15 22:48:57.541: INFO: Pod "pod-27a2202b-670f-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040494ms
Mar 15 22:48:59.545: INFO: Pod "pod-27a2202b-670f-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007772039s
Mar 15 22:49:01.548: INFO: Pod "pod-27a2202b-670f-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011168677s
STEP: Saw pod success
Mar 15 22:49:01.548: INFO: Pod "pod-27a2202b-670f-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 22:49:01.551: INFO: Trying to get logs from node hunter-worker2 pod pod-27a2202b-670f-11ea-811c-0242ac110013 container test-container: 
STEP: delete the pod
Mar 15 22:49:01.586: INFO: Waiting for pod pod-27a2202b-670f-11ea-811c-0242ac110013 to disappear
Mar 15 22:49:01.601: INFO: Pod pod-27a2202b-670f-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:49:01.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dmggw" for this suite.
Mar 15 22:49:07.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:49:07.659: INFO: namespace: e2e-tests-emptydir-dmggw, resource: bindings, ignored listing per whitelist
Mar 15 22:49:07.691: INFO: namespace e2e-tests-emptydir-dmggw deletion completed in 6.086449784s
• [SLOW TEST:10.281 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:49:07.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-fmdnq/secret-test-2dc24e48-670f-11ea-811c-0242ac110013
STEP: Creating a pod to test consume secrets
Mar 15 22:49:07.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-2dc405c3-670f-11ea-811c-0242ac110013" in namespace "e2e-tests-secrets-fmdnq" to be "success or failure"
Mar 15 22:49:07.858: INFO: Pod "pod-configmaps-2dc405c3-670f-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 25.782066ms
Mar 15 22:49:10.100: INFO: Pod "pod-configmaps-2dc405c3-670f-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26751945s
Mar 15 22:49:12.112: INFO: Pod "pod-configmaps-2dc405c3-670f-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.279537014s
Mar 15 22:49:14.160: INFO: Pod "pod-configmaps-2dc405c3-670f-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.327306046s
STEP: Saw pod success
Mar 15 22:49:14.160: INFO: Pod "pod-configmaps-2dc405c3-670f-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 22:49:14.163: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-2dc405c3-670f-11ea-811c-0242ac110013 container env-test: 
STEP: delete the pod
Mar 15 22:49:14.397: INFO: Waiting for pod pod-configmaps-2dc405c3-670f-11ea-811c-0242ac110013 to disappear
Mar 15 22:49:14.573: INFO: Pod pod-configmaps-2dc405c3-670f-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:49:14.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fmdnq" for this suite.
Mar 15 22:49:20.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:49:20.672: INFO: namespace: e2e-tests-secrets-fmdnq, resource: bindings, ignored listing per whitelist
Mar 15 22:49:20.693: INFO: namespace e2e-tests-secrets-fmdnq deletion completed in 6.116042872s
• [SLOW TEST:13.001 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:49:20.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-zfz6r
Mar 15 22:49:24.825: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-zfz6r
STEP: checking the pod's current state and verifying that restartCount is present
Mar 15 22:49:24.828: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:53:26.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-zfz6r" for this suite.
Mar 15 22:53:32.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:53:32.228: INFO: namespace: e2e-tests-container-probe-zfz6r, resource: bindings, ignored listing per whitelist
Mar 15 22:53:32.276: INFO: namespace e2e-tests-container-probe-zfz6r deletion completed in 6.170232605s
• [SLOW TEST:251.583 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:53:32.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-ncph
STEP: Creating a pod to test atomic-volume-subpath
Mar 15 22:53:32.390: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ncph" in namespace "e2e-tests-subpath-zpns6" to be "success or failure"
Mar 15 22:53:32.419: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Pending", Reason="", readiness=false. Elapsed: 28.806397ms
Mar 15 22:53:34.422: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032488846s
Mar 15 22:53:36.499: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109566039s
Mar 15 22:53:38.503: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113488605s
Mar 15 22:53:40.508: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Running", Reason="", readiness=false. Elapsed: 8.118172213s
Mar 15 22:53:42.512: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Running", Reason="", readiness=false. Elapsed: 10.12227434s
Mar 15 22:53:44.517: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Running", Reason="", readiness=false. Elapsed: 12.126988123s
Mar 15 22:53:46.521: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Running", Reason="", readiness=false. Elapsed: 14.131554649s
Mar 15 22:53:48.526: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Running", Reason="", readiness=false. Elapsed: 16.135928035s
Mar 15 22:53:50.530: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Running", Reason="", readiness=false. Elapsed: 18.139799667s
Mar 15 22:53:52.534: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Running", Reason="", readiness=false. Elapsed: 20.144226116s
Mar 15 22:53:54.539: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Running", Reason="", readiness=false. Elapsed: 22.148629235s
Mar 15 22:53:56.543: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Running", Reason="", readiness=false. Elapsed: 24.15322089s
Mar 15 22:53:58.548: INFO: Pod "pod-subpath-test-secret-ncph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.157702841s
STEP: Saw pod success
Mar 15 22:53:58.548: INFO: Pod "pod-subpath-test-secret-ncph" satisfied condition "success or failure"
Mar 15 22:53:58.551: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-ncph container test-container-subpath-secret-ncph: 
STEP: delete the pod
Mar 15 22:53:58.654: INFO: Waiting for pod pod-subpath-test-secret-ncph to disappear
Mar 15 22:53:58.674: INFO: Pod pod-subpath-test-secret-ncph no longer exists
STEP: Deleting pod pod-subpath-test-secret-ncph
Mar 15 22:53:58.674: INFO: Deleting pod "pod-subpath-test-secret-ncph" in namespace "e2e-tests-subpath-zpns6"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:53:58.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zpns6" for this suite.
Mar 15 22:54:04.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:54:04.737: INFO: namespace: e2e-tests-subpath-zpns6, resource: bindings, ignored listing per whitelist
Mar 15 22:54:04.769: INFO: namespace e2e-tests-subpath-zpns6 deletion completed in 6.090853416s
• [SLOW TEST:32.493 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:54:04.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0315 22:54:45.187332 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 15 22:54:45.187: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:54:45.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-dls6d" for this suite.
Mar 15 22:54:55.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:54:55.228: INFO: namespace: e2e-tests-gc-dls6d, resource: bindings, ignored listing per whitelist
Mar 15 22:54:55.283: INFO: namespace e2e-tests-gc-dls6d deletion completed in 10.092315953s
• [SLOW TEST:50.513 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:54:55.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 15 22:54:55.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-rqkq8'
Mar 15 22:54:55.485: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 15 22:54:55.485: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Mar 15 22:54:55.497: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Mar 15 22:54:55.520: INFO: scanned /root for discovery docs: 
Mar 15 22:54:55.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-rqkq8'
Mar 15 22:55:11.524: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar 15 22:55:11.524: INFO: stdout: "Created e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d\nScaling up e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Mar 15 22:55:11.524: INFO: stdout: "Created e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d\nScaling up e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Mar 15 22:55:11.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-rqkq8'
Mar 15 22:55:11.624: INFO: stderr: ""
Mar 15 22:55:11.624: INFO: stdout: "e2e-test-nginx-rc-6hmqt e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d-xgp86 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Mar 15 22:55:16.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-rqkq8'
Mar 15 22:55:16.761: INFO: stderr: ""
Mar 15 22:55:16.761: INFO: stdout: "e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d-xgp86 "
Mar 15 22:55:16.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d-xgp86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rqkq8'
Mar 15 22:55:16.860: INFO: stderr: ""
Mar 15 22:55:16.860: INFO: stdout: "true"
Mar 15 22:55:16.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d-xgp86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rqkq8'
Mar 15 22:55:16.957: INFO: stderr: ""
Mar 15 22:55:16.957: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Mar 15 22:55:16.957: INFO: e2e-test-nginx-rc-cfe60a453e6b2d93d63cfba354a72d4d-xgp86 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Mar 15 22:55:16.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-rqkq8'
Mar 15 22:55:17.091: INFO: stderr: ""
Mar 15 22:55:17.092: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:55:17.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rqkq8" for this suite.
Mar 15 22:55:23.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:55:23.215: INFO: namespace: e2e-tests-kubectl-rqkq8, resource: bindings, ignored listing per whitelist Mar 15 22:55:23.219: INFO: namespace e2e-tests-kubectl-rqkq8 deletion completed in 6.124498535s • [SLOW TEST:27.937 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:55:23.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 15 22:55:23.344: INFO: Waiting up to 5m0s for pod "pod-0d96e1f2-6710-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-gzvng" to be "success or failure" Mar 15 22:55:23.464: INFO: Pod "pod-0d96e1f2-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 119.836008ms Mar 15 22:55:25.468: INFO: Pod "pod-0d96e1f2-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123612983s Mar 15 22:55:27.472: INFO: Pod "pod-0d96e1f2-6710-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127441916s STEP: Saw pod success Mar 15 22:55:27.472: INFO: Pod "pod-0d96e1f2-6710-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 22:55:27.475: INFO: Trying to get logs from node hunter-worker2 pod pod-0d96e1f2-6710-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 15 22:55:27.665: INFO: Waiting for pod pod-0d96e1f2-6710-11ea-811c-0242ac110013 to disappear Mar 15 22:55:27.690: INFO: Pod pod-0d96e1f2-6710-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:55:27.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-gzvng" for this suite. 
Mar 15 22:55:33.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:55:33.775: INFO: namespace: e2e-tests-emptydir-gzvng, resource: bindings, ignored listing per whitelist Mar 15 22:55:33.800: INFO: namespace e2e-tests-emptydir-gzvng deletion completed in 6.106611278s • [SLOW TEST:10.580 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:55:33.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-zpqzx A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-zpqzx;check="$$(dig +tcp +noall +answer +search 
dns-test-service.e2e-tests-dns-zpqzx A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-zpqzx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-zpqzx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-zpqzx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-zpqzx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-zpqzx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-zpqzx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-zpqzx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-zpqzx.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-zpqzx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 197.50.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.50.197_udp@PTR;check="$$(dig +tcp +noall +answer +search 197.50.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.50.197_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-zpqzx A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-zpqzx;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-zpqzx A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-zpqzx.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-zpqzx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-zpqzx.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-zpqzx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-zpqzx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-zpqzx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-zpqzx.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-zpqzx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 197.50.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.50.197_udp@PTR;check="$$(dig +tcp +noall +answer +search 197.50.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.50.197_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 15 22:55:54.103: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.105: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.116: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.119: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.142: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the 
server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.144: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.147: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-zpqzx from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.149: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.151: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.154: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.156: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.158: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods 
dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:54.173: INFO: Lookups using e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-zpqzx jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx jessie_udp@dns-test-service.e2e-tests-dns-zpqzx.svc jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc] Mar 15 22:55:59.359: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.362: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.373: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.376: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.399: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods 
dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.401: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.403: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-zpqzx from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.405: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.407: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.414: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.416: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.418: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:55:59.474: INFO: 
Lookups using e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-zpqzx jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx jessie_udp@dns-test-service.e2e-tests-dns-zpqzx.svc jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc] Mar 15 22:56:04.177: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.180: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.192: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.195: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.217: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.222: INFO: Unable 
to read jessie_tcp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.225: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-zpqzx from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.228: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.230: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.232: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.235: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.237: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:04.252: INFO: Lookups using e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013 
failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-zpqzx jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx jessie_udp@dns-test-service.e2e-tests-dns-zpqzx.svc jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc] Mar 15 22:56:09.178: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.182: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.195: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.198: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.294: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.297: INFO: Unable to read jessie_tcp@dns-test-service from pod 
e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.299: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-zpqzx from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.302: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.304: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.306: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.309: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.312: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc from pod e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013: the server could not find the requested resource (get pods dns-test-13e8848e-6710-11ea-811c-0242ac110013) Mar 15 22:56:09.344: INFO: Lookups using e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-zpqzx jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx jessie_udp@dns-test-service.e2e-tests-dns-zpqzx.svc jessie_tcp@dns-test-service.e2e-tests-dns-zpqzx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-zpqzx.svc] Mar 15 22:56:14.262: INFO: DNS probes using e2e-tests-dns-zpqzx/dns-test-13e8848e-6710-11ea-811c-0242ac110013 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:56:14.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-zpqzx" for this suite. 
Mar 15 22:56:22.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:56:22.918: INFO: namespace: e2e-tests-dns-zpqzx, resource: bindings, ignored listing per whitelist Mar 15 22:56:23.004: INFO: namespace e2e-tests-dns-zpqzx deletion completed in 8.123580768s • [SLOW TEST:49.204 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:56:23.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-3139e46c-6710-11ea-811c-0242ac110013 STEP: Creating a pod to test consume secrets Mar 15 22:56:23.135: INFO: Waiting up to 5m0s for pod "pod-secrets-313aa415-6710-11ea-811c-0242ac110013" in namespace "e2e-tests-secrets-pgjc5" to be "success or failure" Mar 15 22:56:23.150: INFO: Pod "pod-secrets-313aa415-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.021042ms Mar 15 22:56:25.227: INFO: Pod "pod-secrets-313aa415-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091272232s Mar 15 22:56:27.230: INFO: Pod "pod-secrets-313aa415-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094767358s Mar 15 22:56:29.234: INFO: Pod "pod-secrets-313aa415-6710-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098507167s STEP: Saw pod success Mar 15 22:56:29.234: INFO: Pod "pod-secrets-313aa415-6710-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 22:56:29.236: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-313aa415-6710-11ea-811c-0242ac110013 container secret-volume-test: STEP: delete the pod Mar 15 22:56:29.324: INFO: Waiting for pod pod-secrets-313aa415-6710-11ea-811c-0242ac110013 to disappear Mar 15 22:56:29.332: INFO: Pod pod-secrets-313aa415-6710-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:56:29.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-pgjc5" for this suite. 
Mar 15 22:56:35.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:56:35.379: INFO: namespace: e2e-tests-secrets-pgjc5, resource: bindings, ignored listing per whitelist Mar 15 22:56:35.445: INFO: namespace e2e-tests-secrets-pgjc5 deletion completed in 6.110137255s • [SLOW TEST:12.441 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:56:35.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 15 22:56:35.541: INFO: Waiting up to 5m0s for pod "pod-389f741a-6710-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-lpqs2" to be "success or failure" Mar 15 22:56:35.566: INFO: Pod "pod-389f741a-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.818909ms Mar 15 22:56:37.659: INFO: Pod "pod-389f741a-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117961056s Mar 15 22:56:39.663: INFO: Pod "pod-389f741a-6710-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12196059s STEP: Saw pod success Mar 15 22:56:39.663: INFO: Pod "pod-389f741a-6710-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 22:56:39.666: INFO: Trying to get logs from node hunter-worker pod pod-389f741a-6710-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 15 22:56:39.698: INFO: Waiting for pod pod-389f741a-6710-11ea-811c-0242ac110013 to disappear Mar 15 22:56:39.714: INFO: Pod pod-389f741a-6710-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:56:39.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lpqs2" for this suite. 
Mar 15 22:56:45.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 22:56:45.835: INFO: namespace: e2e-tests-emptydir-lpqs2, resource: bindings, ignored listing per whitelist Mar 15 22:56:45.843: INFO: namespace e2e-tests-emptydir-lpqs2 deletion completed in 6.125171825s • [SLOW TEST:10.396 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 22:56:45.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Mar 15 22:56:46.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-dsnjr run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin 
closed'' Mar 15 22:56:49.593: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0315 22:56:49.520318 287 log.go:172] (0xc0007340b0) (0xc0006a9400) Create stream\nI0315 22:56:49.520371 287 log.go:172] (0xc0007340b0) (0xc0006a9400) Stream added, broadcasting: 1\nI0315 22:56:49.522776 287 log.go:172] (0xc0007340b0) Reply frame received for 1\nI0315 22:56:49.522834 287 log.go:172] (0xc0007340b0) (0xc000750000) Create stream\nI0315 22:56:49.522855 287 log.go:172] (0xc0007340b0) (0xc000750000) Stream added, broadcasting: 3\nI0315 22:56:49.523675 287 log.go:172] (0xc0007340b0) Reply frame received for 3\nI0315 22:56:49.523754 287 log.go:172] (0xc0007340b0) (0xc000520000) Create stream\nI0315 22:56:49.523781 287 log.go:172] (0xc0007340b0) (0xc000520000) Stream added, broadcasting: 5\nI0315 22:56:49.524509 287 log.go:172] (0xc0007340b0) Reply frame received for 5\nI0315 22:56:49.524534 287 log.go:172] (0xc0007340b0) (0xc000750140) Create stream\nI0315 22:56:49.524542 287 log.go:172] (0xc0007340b0) (0xc000750140) Stream added, broadcasting: 7\nI0315 22:56:49.525457 287 log.go:172] (0xc0007340b0) Reply frame received for 7\nI0315 22:56:49.525601 287 log.go:172] (0xc000750000) (3) Writing data frame\nI0315 22:56:49.525720 287 log.go:172] (0xc000750000) (3) Writing data frame\nI0315 22:56:49.526518 287 log.go:172] (0xc0007340b0) Data frame received for 5\nI0315 22:56:49.526541 287 log.go:172] (0xc000520000) (5) Data frame handling\nI0315 22:56:49.526567 287 log.go:172] (0xc000520000) (5) Data frame sent\nI0315 22:56:49.526928 287 log.go:172] (0xc0007340b0) Data frame received for 5\nI0315 22:56:49.526943 287 log.go:172] (0xc000520000) (5) Data frame handling\nI0315 22:56:49.526956 287 log.go:172] (0xc000520000) (5) Data frame sent\nI0315 22:56:49.568613 287 log.go:172] (0xc0007340b0) Data frame received 
for 5\nI0315 22:56:49.568647 287 log.go:172] (0xc000520000) (5) Data frame handling\nI0315 22:56:49.569048 287 log.go:172] (0xc0007340b0) Data frame received for 7\nI0315 22:56:49.569075 287 log.go:172] (0xc000750140) (7) Data frame handling\nI0315 22:56:49.569276 287 log.go:172] (0xc0007340b0) Data frame received for 1\nI0315 22:56:49.569306 287 log.go:172] (0xc0006a9400) (1) Data frame handling\nI0315 22:56:49.569351 287 log.go:172] (0xc0006a9400) (1) Data frame sent\nI0315 22:56:49.569394 287 log.go:172] (0xc0007340b0) (0xc000750000) Stream removed, broadcasting: 3\nI0315 22:56:49.569427 287 log.go:172] (0xc0007340b0) (0xc0006a9400) Stream removed, broadcasting: 1\nI0315 22:56:49.569533 287 log.go:172] (0xc0007340b0) (0xc0006a9400) Stream removed, broadcasting: 1\nI0315 22:56:49.569563 287 log.go:172] (0xc0007340b0) (0xc000750000) Stream removed, broadcasting: 3\nI0315 22:56:49.569582 287 log.go:172] (0xc0007340b0) (0xc000520000) Stream removed, broadcasting: 5\nI0315 22:56:49.569885 287 log.go:172] (0xc0007340b0) Go away received\nI0315 22:56:49.569937 287 log.go:172] (0xc0007340b0) (0xc000750140) Stream removed, broadcasting: 7\n" Mar 15 22:56:49.594: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 22:56:51.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dsnjr" for this suite. 
Mar 15 22:57:05.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:57:05.667: INFO: namespace: e2e-tests-kubectl-dsnjr, resource: bindings, ignored listing per whitelist
Mar 15 22:57:05.720: INFO: namespace e2e-tests-kubectl-dsnjr deletion completed in 14.091900904s
• [SLOW TEST:19.876 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:57:05.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 15 22:57:10.530: INFO: Successfully updated pod "labelsupdate4aad2f85-6710-11ea-811c-0242ac110013"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:57:12.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vkqkf" for this suite.
Mar 15 22:57:34.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:57:34.606: INFO: namespace: e2e-tests-downward-api-vkqkf, resource: bindings, ignored listing per whitelist
Mar 15 22:57:34.649: INFO: namespace e2e-tests-downward-api-vkqkf deletion completed in 22.086066184s
• [SLOW TEST:28.929 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:57:34.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-5c0c9123-6710-11ea-811c-0242ac110013
STEP: Creating a pod to test consume configMaps
Mar 15 22:57:35.005: INFO: Waiting up to 5m0s 
for pod "pod-projected-configmaps-5c115e24-6710-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-rnw4n" to be "success or failure"
Mar 15 22:57:35.016: INFO: Pod "pod-projected-configmaps-5c115e24-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 11.270168ms
Mar 15 22:57:37.020: INFO: Pod "pod-projected-configmaps-5c115e24-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015379975s
Mar 15 22:57:39.026: INFO: Pod "pod-projected-configmaps-5c115e24-6710-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021392435s
STEP: Saw pod success
Mar 15 22:57:39.026: INFO: Pod "pod-projected-configmaps-5c115e24-6710-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 22:57:39.030: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-5c115e24-6710-11ea-811c-0242ac110013 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 15 22:57:39.047: INFO: Waiting for pod pod-projected-configmaps-5c115e24-6710-11ea-811c-0242ac110013 to disappear
Mar 15 22:57:39.058: INFO: Pod pod-projected-configmaps-5c115e24-6710-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:57:39.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rnw4n" for this suite.
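A recurring pattern in this run is the framework waiting "up to 5m0s" for a pod to satisfy the "success or failure" condition, logging the phase and elapsed time at each poll. A minimal Python sketch of that polling loop, assuming a hypothetical `get_phase` callback (this is not the e2e framework's actual API, just an illustration of the pattern the log shows):

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, poll_s=2.0, clock=time.monotonic):
    """Poll get_phase() until a terminal phase ("Succeeded"/"Failed") or timeout.

    Mirrors the log pattern: each poll reports the current phase and the
    elapsed time since waiting began.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod: Phase={phase!r}, Elapsed={elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still in phase {phase!r} after {elapsed:.1f}s")
        time.sleep(poll_s)

# Simulated phase sequence like the one logged: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, _ = wait_for_pod_condition(lambda: next(phases), poll_s=0.0)
# phase == "Succeeded"
```

The real framework additionally treats "Failed" as success here because the test's condition is literally "success or failure"; a sketch like this only captures the poll-report-terminate shape.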
Mar 15 22:57:45.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:57:45.127: INFO: namespace: e2e-tests-projected-rnw4n, resource: bindings, ignored listing per whitelist
Mar 15 22:57:45.171: INFO: namespace e2e-tests-projected-rnw4n deletion completed in 6.098941208s
• [SLOW TEST:10.521 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:57:45.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 15 22:57:46.492: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62d12501-6710-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-n9x5l" to be "success or failure"
Mar 15 22:57:46.725: INFO: Pod 
"downwardapi-volume-62d12501-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 232.914779ms
Mar 15 22:57:48.747: INFO: Pod "downwardapi-volume-62d12501-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255290621s
Mar 15 22:57:50.916: INFO: Pod "downwardapi-volume-62d12501-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424596058s
Mar 15 22:57:53.064: INFO: Pod "downwardapi-volume-62d12501-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57236454s
Mar 15 22:57:55.068: INFO: Pod "downwardapi-volume-62d12501-6710-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.576032707s
STEP: Saw pod success
Mar 15 22:57:55.068: INFO: Pod "downwardapi-volume-62d12501-6710-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 22:57:55.071: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-62d12501-6710-11ea-811c-0242ac110013 container client-container: 
STEP: delete the pod
Mar 15 22:57:55.133: INFO: Waiting for pod downwardapi-volume-62d12501-6710-11ea-811c-0242ac110013 to disappear
Mar 15 22:57:55.165: INFO: Pod downwardapi-volume-62d12501-6710-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:57:55.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n9x5l" for this suite.
Mar 15 22:58:03.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:58:03.384: INFO: namespace: e2e-tests-projected-n9x5l, resource: bindings, ignored listing per whitelist
Mar 15 22:58:03.439: INFO: namespace e2e-tests-projected-n9x5l deletion completed in 8.269835845s
• [SLOW TEST:18.268 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:58:03.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 15 22:58:04.420: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d9999d2-6710-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-tlbm7" to be "success or failure"
Mar 15 22:58:04.569: INFO: Pod "downwardapi-volume-6d9999d2-6710-11ea-811c-0242ac110013": 
Phase="Pending", Reason="", readiness=false. Elapsed: 149.400484ms
Mar 15 22:58:06.629: INFO: Pod "downwardapi-volume-6d9999d2-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209442232s
Mar 15 22:58:08.744: INFO: Pod "downwardapi-volume-6d9999d2-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324061145s
Mar 15 22:58:10.755: INFO: Pod "downwardapi-volume-6d9999d2-6710-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.334684112s
Mar 15 22:58:12.759: INFO: Pod "downwardapi-volume-6d9999d2-6710-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.338711931s
STEP: Saw pod success
Mar 15 22:58:12.759: INFO: Pod "downwardapi-volume-6d9999d2-6710-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 22:58:12.762: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6d9999d2-6710-11ea-811c-0242ac110013 container client-container: 
STEP: delete the pod
Mar 15 22:58:13.189: INFO: Waiting for pod downwardapi-volume-6d9999d2-6710-11ea-811c-0242ac110013 to disappear
Mar 15 22:58:13.238: INFO: Pod downwardapi-volume-6d9999d2-6710-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:58:13.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tlbm7" for this suite.
Mar 15 22:58:19.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:58:19.310: INFO: namespace: e2e-tests-downward-api-tlbm7, resource: bindings, ignored listing per whitelist
Mar 15 22:58:19.353: INFO: namespace e2e-tests-downward-api-tlbm7 deletion completed in 6.112173013s
• [SLOW TEST:15.914 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:58:19.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Mar 15 22:58:20.059: INFO: Waiting up to 5m0s for pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-nt47b" in namespace "e2e-tests-svcaccounts-sqd8r" to be "success or failure"
Mar 15 22:58:20.064: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-nt47b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.112547ms
Mar 15 22:58:22.067: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-nt47b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006990255s
Mar 15 22:58:24.098: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-nt47b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038899271s
Mar 15 22:58:26.102: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-nt47b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042011331s
STEP: Saw pod success
Mar 15 22:58:26.102: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-nt47b" satisfied condition "success or failure"
Mar 15 22:58:26.104: INFO: Trying to get logs from node hunter-worker pod pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-nt47b container token-test: 
STEP: delete the pod
Mar 15 22:58:26.163: INFO: Waiting for pod pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-nt47b to disappear
Mar 15 22:58:26.172: INFO: Pod pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-nt47b no longer exists
STEP: Creating a pod to test consume service account root CA
Mar 15 22:58:26.175: INFO: Waiting up to 5m0s for pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk" in namespace "e2e-tests-svcaccounts-sqd8r" to be "success or failure"
Mar 15 22:58:26.178: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42841ms
Mar 15 22:58:28.216: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041170531s
Mar 15 22:58:30.276: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100799585s
Mar 15 22:58:32.474: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.298897426s
Mar 15 22:58:34.478: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.302990419s
Mar 15 22:58:36.481: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.3061941s
Mar 15 22:58:38.485: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.310063312s
STEP: Saw pod success
Mar 15 22:58:38.485: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk" satisfied condition "success or failure"
Mar 15 22:58:38.488: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk container root-ca-test: 
STEP: delete the pod
Mar 15 22:58:38.596: INFO: Waiting for pod pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk to disappear
Mar 15 22:58:38.640: INFO: Pod pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-jw5dk no longer exists
STEP: Creating a pod to test consume service account namespace
Mar 15 22:58:38.645: INFO: Waiting up to 5m0s for pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz" in namespace "e2e-tests-svcaccounts-sqd8r" to be "success or failure"
Mar 15 22:58:38.785: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz": Phase="Pending", Reason="", readiness=false. Elapsed: 140.689375ms
Mar 15 22:58:40.789: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144541952s
Mar 15 22:58:42.793: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148429628s
Mar 15 22:58:45.106: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.461815712s
Mar 15 22:58:47.347: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.702331165s
Mar 15 22:58:49.449: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.804927883s
Mar 15 22:58:51.453: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.808443289s
Mar 15 22:58:53.457: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.812529076s
Mar 15 22:58:55.460: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.815111579s
Mar 15 22:58:57.463: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz": Phase="Running", Reason="", readiness=false. Elapsed: 18.818802088s
Mar 15 22:58:59.695: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 21.050686975s
STEP: Saw pod success
Mar 15 22:58:59.695: INFO: Pod "pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz" satisfied condition "success or failure"
Mar 15 22:58:59.697: INFO: Trying to get logs from node hunter-worker pod pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz container namespace-test: 
STEP: delete the pod
Mar 15 22:58:59.928: INFO: Waiting for pod pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz to disappear
Mar 15 22:58:59.977: INFO: Pod pod-service-account-76ec46bc-6710-11ea-811c-0242ac110013-lktlz no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 22:58:59.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-sqd8r" for this suite.
Mar 15 22:59:08.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 22:59:08.205: INFO: namespace: e2e-tests-svcaccounts-sqd8r, resource: bindings, ignored listing per whitelist
Mar 15 22:59:08.238: INFO: namespace e2e-tests-svcaccounts-sqd8r deletion completed in 8.259161821s
• [SLOW TEST:48.885 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 22:59:08.239: 
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-93c94dad-6710-11ea-811c-0242ac110013
STEP: Creating configMap with name cm-test-opt-upd-93c94e39-6710-11ea-811c-0242ac110013
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-93c94dad-6710-11ea-811c-0242ac110013
STEP: Updating configmap cm-test-opt-upd-93c94e39-6710-11ea-811c-0242ac110013
STEP: Creating configMap with name cm-test-opt-create-93c94e67-6710-11ea-811c-0242ac110013
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:00:23.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7t8dx" for this suite.
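The "optional updates should be reflected in volume" test above, like the atomic-writer subpath tests later in this run, depends on the kubelet publishing configMap data through a swapped symlink so consumers see either the old payload or the new one, never a mix. A simplified, self-contained Python sketch of that symlink-swap idea (the directory layout and names here are illustrative assumptions, not kubelet's exact AtomicWriter implementation):

```python
import os
import tempfile

def atomic_update(volume_dir, files):
    """Publish a new set of files under volume_dir without torn reads.

    Writes the payload into a fresh hidden directory, then atomically
    repoints a "..data" symlink at it via rename, so a reader following
    top-level names always sees one complete generation of the data.
    """
    # 1. Write the new payload into a fresh timestamp-style directory.
    new_dir = tempfile.mkdtemp(dir=volume_dir, prefix="..ts_")
    for name, content in files.items():
        with open(os.path.join(new_dir, name), "w") as f:
            f.write(content)
    # 2. Swap the "..data" symlink atomically (rename over the old link).
    tmp_link = os.path.join(volume_dir, "..data_tmp")
    data_link = os.path.join(volume_dir, "..data")
    os.symlink(os.path.basename(new_dir), tmp_link)
    os.replace(tmp_link, data_link)
    # 3. Ensure stable top-level names that route through "..data".
    for name in files:
        top = os.path.join(volume_dir, name)
        if not os.path.islink(top):
            os.symlink(os.path.join("..data", name), top)
```

Under this scheme the top-level file names never change; only the `..data` link moves, which is why the test can simply keep reading the mounted path and wait to "observe update in volume".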
Mar 15 23:00:45.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:00:45.823: INFO: namespace: e2e-tests-projected-7t8dx, resource: bindings, ignored listing per whitelist
Mar 15 23:00:45.867: INFO: namespace e2e-tests-projected-7t8dx deletion completed in 22.106739681s
• [SLOW TEST:97.628 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:00:45.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-l7wmb
Mar 15 23:00:51.982: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-l7wmb
STEP: checking the pod's current state and verifying that restartCount is present
Mar 15 23:00:51.985: INFO: Initial restart count of pod liveness-http is 
0
Mar 15 23:01:12.026: INFO: Restart count of pod e2e-tests-container-probe-l7wmb/liveness-http is now 1 (20.040805797s elapsed)
Mar 15 23:01:32.079: INFO: Restart count of pod e2e-tests-container-probe-l7wmb/liveness-http is now 2 (40.094168904s elapsed)
Mar 15 23:01:52.133: INFO: Restart count of pod e2e-tests-container-probe-l7wmb/liveness-http is now 3 (1m0.14857046s elapsed)
Mar 15 23:02:12.179: INFO: Restart count of pod e2e-tests-container-probe-l7wmb/liveness-http is now 4 (1m20.194002789s elapsed)
Mar 15 23:03:18.506: INFO: Restart count of pod e2e-tests-container-probe-l7wmb/liveness-http is now 5 (2m26.521021114s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:03:18.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-l7wmb" for this suite.
Mar 15 23:03:24.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:03:24.682: INFO: namespace: e2e-tests-container-probe-l7wmb, resource: bindings, ignored listing per whitelist
Mar 15 23:03:24.691: INFO: namespace e2e-tests-container-probe-l7wmb deletion completed in 6.148074399s
• [SLOW TEST:158.824 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:03:24.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-8jgm
STEP: Creating a pod to test atomic-volume-subpath
Mar 15 23:03:24.838: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-8jgm" in namespace "e2e-tests-subpath-k72bz" to be "success or failure"
Mar 15 23:03:24.842: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.477532ms
Mar 15 23:03:26.902: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063764069s
Mar 15 23:03:28.906: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068057098s
Mar 15 23:03:30.910: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072349418s
Mar 15 23:03:32.915: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Running", Reason="", readiness=false. Elapsed: 8.076885682s
Mar 15 23:03:34.919: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Running", Reason="", readiness=false. Elapsed: 10.080898485s
Mar 15 23:03:36.923: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Running", Reason="", readiness=false. Elapsed: 12.085221449s
Mar 15 23:03:38.928: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.089733374s Mar 15 23:03:40.932: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Running", Reason="", readiness=false. Elapsed: 16.093938189s Mar 15 23:03:42.936: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Running", Reason="", readiness=false. Elapsed: 18.098132231s Mar 15 23:03:44.940: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Running", Reason="", readiness=false. Elapsed: 20.102197897s Mar 15 23:03:46.944: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Running", Reason="", readiness=false. Elapsed: 22.105878003s Mar 15 23:03:48.948: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Running", Reason="", readiness=false. Elapsed: 24.110313442s Mar 15 23:03:50.952: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Running", Reason="", readiness=false. Elapsed: 26.114543845s Mar 15 23:03:52.957: INFO: Pod "pod-subpath-test-downwardapi-8jgm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.11866375s STEP: Saw pod success Mar 15 23:03:52.957: INFO: Pod "pod-subpath-test-downwardapi-8jgm" satisfied condition "success or failure" Mar 15 23:03:52.960: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-8jgm container test-container-subpath-downwardapi-8jgm: STEP: delete the pod Mar 15 23:03:52.992: INFO: Waiting for pod pod-subpath-test-downwardapi-8jgm to disappear Mar 15 23:03:52.998: INFO: Pod pod-subpath-test-downwardapi-8jgm no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-8jgm Mar 15 23:03:52.998: INFO: Deleting pod "pod-subpath-test-downwardapi-8jgm" in namespace "e2e-tests-subpath-k72bz" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:03:53.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-k72bz" for this suite. 
Mar 15 23:03:59.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:03:59.107: INFO: namespace: e2e-tests-subpath-k72bz, resource: bindings, ignored listing per whitelist Mar 15 23:03:59.115: INFO: namespace e2e-tests-subpath-k72bz deletion completed in 6.09877621s • [SLOW TEST:34.424 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:03:59.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-555m4 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 
stateful set ss in namespace e2e-tests-statefulset-555m4 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-555m4 Mar 15 23:03:59.396: INFO: Found 0 stateful pods, waiting for 1 Mar 15 23:04:09.401: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 15 23:04:09.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-555m4 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 23:04:09.840: INFO: stderr: "I0315 23:04:09.538080 313 log.go:172] (0xc000138160) (0xc0005fe780) Create stream\nI0315 23:04:09.538147 313 log.go:172] (0xc000138160) (0xc0005fe780) Stream added, broadcasting: 1\nI0315 23:04:09.543070 313 log.go:172] (0xc000138160) Reply frame received for 1\nI0315 23:04:09.543146 313 log.go:172] (0xc000138160) (0xc00023ebe0) Create stream\nI0315 23:04:09.543170 313 log.go:172] (0xc000138160) (0xc00023ebe0) Stream added, broadcasting: 3\nI0315 23:04:09.544996 313 log.go:172] (0xc000138160) Reply frame received for 3\nI0315 23:04:09.545050 313 log.go:172] (0xc000138160) (0xc00023ed20) Create stream\nI0315 23:04:09.545067 313 log.go:172] (0xc000138160) (0xc00023ed20) Stream added, broadcasting: 5\nI0315 23:04:09.546084 313 log.go:172] (0xc000138160) Reply frame received for 5\nI0315 23:04:09.833542 313 log.go:172] (0xc000138160) Data frame received for 3\nI0315 23:04:09.833578 313 log.go:172] (0xc00023ebe0) (3) Data frame handling\nI0315 23:04:09.833605 313 log.go:172] (0xc00023ebe0) (3) Data frame sent\nI0315 23:04:09.833895 313 log.go:172] (0xc000138160) Data frame received for 3\nI0315 23:04:09.833933 313 log.go:172] (0xc00023ebe0) (3) Data frame handling\nI0315 23:04:09.833975 313 log.go:172] (0xc000138160) Data frame received for 5\nI0315 23:04:09.833994 313 log.go:172] (0xc00023ed20) (5) Data frame 
handling\nI0315 23:04:09.835996 313 log.go:172] (0xc000138160) Data frame received for 1\nI0315 23:04:09.836025 313 log.go:172] (0xc0005fe780) (1) Data frame handling\nI0315 23:04:09.836038 313 log.go:172] (0xc0005fe780) (1) Data frame sent\nI0315 23:04:09.836064 313 log.go:172] (0xc000138160) (0xc0005fe780) Stream removed, broadcasting: 1\nI0315 23:04:09.836110 313 log.go:172] (0xc000138160) Go away received\nI0315 23:04:09.836427 313 log.go:172] (0xc000138160) (0xc0005fe780) Stream removed, broadcasting: 1\nI0315 23:04:09.836468 313 log.go:172] (0xc000138160) (0xc00023ebe0) Stream removed, broadcasting: 3\nI0315 23:04:09.836514 313 log.go:172] (0xc000138160) (0xc00023ed20) Stream removed, broadcasting: 5\n" Mar 15 23:04:09.840: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 23:04:09.840: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 23:04:09.843: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 15 23:04:20.331: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 15 23:04:20.331: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 23:04:20.348: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 23:04:20.348: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:03:59 +0000 UTC }] Mar 15 23:04:20.348: INFO: Mar 15 23:04:20.348: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 15 23:04:21.353: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 8.992795979s Mar 15 23:04:22.357: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98766894s Mar 15 23:04:23.457: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98341946s Mar 15 23:04:24.462: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.883387583s Mar 15 23:04:25.467: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.878684351s Mar 15 23:04:26.480: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.873659602s Mar 15 23:04:27.485: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.860519075s Mar 15 23:04:28.491: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.85543333s Mar 15 23:04:29.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 849.760421ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-555m4 Mar 15 23:04:30.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-555m4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:04:30.714: INFO: stderr: "I0315 23:04:30.614079 336 log.go:172] (0xc000154840) (0xc000700640) Create stream\nI0315 23:04:30.614143 336 log.go:172] (0xc000154840) (0xc000700640) Stream added, broadcasting: 1\nI0315 23:04:30.616202 336 log.go:172] (0xc000154840) Reply frame received for 1\nI0315 23:04:30.616237 336 log.go:172] (0xc000154840) (0xc0007006e0) Create stream\nI0315 23:04:30.616246 336 log.go:172] (0xc000154840) (0xc0007006e0) Stream added, broadcasting: 3\nI0315 23:04:30.617013 336 log.go:172] (0xc000154840) Reply frame received for 3\nI0315 23:04:30.617047 336 log.go:172] (0xc000154840) (0xc000692c80) Create stream\nI0315 23:04:30.617062 336 log.go:172] (0xc000154840) (0xc000692c80) Stream added, broadcasting: 5\nI0315 23:04:30.617938 336 log.go:172] (0xc000154840) Reply frame received for 5\nI0315 23:04:30.709927 336 
log.go:172] (0xc000154840) Data frame received for 5\nI0315 23:04:30.709971 336 log.go:172] (0xc000692c80) (5) Data frame handling\nI0315 23:04:30.709997 336 log.go:172] (0xc000154840) Data frame received for 3\nI0315 23:04:30.710008 336 log.go:172] (0xc0007006e0) (3) Data frame handling\nI0315 23:04:30.710023 336 log.go:172] (0xc0007006e0) (3) Data frame sent\nI0315 23:04:30.710042 336 log.go:172] (0xc000154840) Data frame received for 3\nI0315 23:04:30.710054 336 log.go:172] (0xc0007006e0) (3) Data frame handling\nI0315 23:04:30.711448 336 log.go:172] (0xc000154840) Data frame received for 1\nI0315 23:04:30.711491 336 log.go:172] (0xc000700640) (1) Data frame handling\nI0315 23:04:30.711505 336 log.go:172] (0xc000700640) (1) Data frame sent\nI0315 23:04:30.711564 336 log.go:172] (0xc000154840) (0xc000700640) Stream removed, broadcasting: 1\nI0315 23:04:30.711603 336 log.go:172] (0xc000154840) Go away received\nI0315 23:04:30.711844 336 log.go:172] (0xc000154840) (0xc000700640) Stream removed, broadcasting: 1\nI0315 23:04:30.711870 336 log.go:172] (0xc000154840) (0xc0007006e0) Stream removed, broadcasting: 3\nI0315 23:04:30.711881 336 log.go:172] (0xc000154840) (0xc000692c80) Stream removed, broadcasting: 5\n" Mar 15 23:04:30.715: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 23:04:30.715: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 23:04:30.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-555m4 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:04:30.930: INFO: stderr: "I0315 23:04:30.850050 359 log.go:172] (0xc000138840) (0xc0007d75e0) Create stream\nI0315 23:04:30.850095 359 log.go:172] (0xc000138840) (0xc0007d75e0) Stream added, broadcasting: 1\nI0315 23:04:30.855133 359 log.go:172] (0xc000138840) Reply frame received for 1\nI0315 
23:04:30.855184 359 log.go:172] (0xc000138840) (0xc0006a4000) Create stream\nI0315 23:04:30.855208 359 log.go:172] (0xc000138840) (0xc0006a4000) Stream added, broadcasting: 3\nI0315 23:04:30.856152 359 log.go:172] (0xc000138840) Reply frame received for 3\nI0315 23:04:30.856177 359 log.go:172] (0xc000138840) (0xc0007d7680) Create stream\nI0315 23:04:30.856183 359 log.go:172] (0xc000138840) (0xc0007d7680) Stream added, broadcasting: 5\nI0315 23:04:30.857276 359 log.go:172] (0xc000138840) Reply frame received for 5\nI0315 23:04:30.923983 359 log.go:172] (0xc000138840) Data frame received for 5\nI0315 23:04:30.924037 359 log.go:172] (0xc000138840) Data frame received for 3\nI0315 23:04:30.924074 359 log.go:172] (0xc0006a4000) (3) Data frame handling\nI0315 23:04:30.924216 359 log.go:172] (0xc0006a4000) (3) Data frame sent\nI0315 23:04:30.924243 359 log.go:172] (0xc0007d7680) (5) Data frame handling\nI0315 23:04:30.924288 359 log.go:172] (0xc0007d7680) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0315 23:04:30.924313 359 log.go:172] (0xc000138840) Data frame received for 3\nI0315 23:04:30.924345 359 log.go:172] (0xc0006a4000) (3) Data frame handling\nI0315 23:04:30.924383 359 log.go:172] (0xc000138840) Data frame received for 5\nI0315 23:04:30.924403 359 log.go:172] (0xc0007d7680) (5) Data frame handling\nI0315 23:04:30.925744 359 log.go:172] (0xc000138840) Data frame received for 1\nI0315 23:04:30.925859 359 log.go:172] (0xc0007d75e0) (1) Data frame handling\nI0315 23:04:30.925900 359 log.go:172] (0xc0007d75e0) (1) Data frame sent\nI0315 23:04:30.925978 359 log.go:172] (0xc000138840) (0xc0007d75e0) Stream removed, broadcasting: 1\nI0315 23:04:30.926027 359 log.go:172] (0xc000138840) Go away received\nI0315 23:04:30.926237 359 log.go:172] (0xc000138840) (0xc0007d75e0) Stream removed, broadcasting: 1\nI0315 23:04:30.926266 359 log.go:172] (0xc000138840) (0xc0006a4000) Stream removed, broadcasting: 3\nI0315 23:04:30.926285 359 
log.go:172] (0xc000138840) (0xc0007d7680) Stream removed, broadcasting: 5\n" Mar 15 23:04:30.930: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 23:04:30.930: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 23:04:30.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-555m4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:04:31.127: INFO: stderr: "I0315 23:04:31.059669 382 log.go:172] (0xc000138840) (0xc000533360) Create stream\nI0315 23:04:31.059730 382 log.go:172] (0xc000138840) (0xc000533360) Stream added, broadcasting: 1\nI0315 23:04:31.062423 382 log.go:172] (0xc000138840) Reply frame received for 1\nI0315 23:04:31.062492 382 log.go:172] (0xc000138840) (0xc00040c000) Create stream\nI0315 23:04:31.062507 382 log.go:172] (0xc000138840) (0xc00040c000) Stream added, broadcasting: 3\nI0315 23:04:31.063803 382 log.go:172] (0xc000138840) Reply frame received for 3\nI0315 23:04:31.063832 382 log.go:172] (0xc000138840) (0xc00040c0a0) Create stream\nI0315 23:04:31.063844 382 log.go:172] (0xc000138840) (0xc00040c0a0) Stream added, broadcasting: 5\nI0315 23:04:31.064876 382 log.go:172] (0xc000138840) Reply frame received for 5\nI0315 23:04:31.121769 382 log.go:172] (0xc000138840) Data frame received for 5\nI0315 23:04:31.121794 382 log.go:172] (0xc00040c0a0) (5) Data frame handling\nI0315 23:04:31.121815 382 log.go:172] (0xc00040c0a0) (5) Data frame sent\nI0315 23:04:31.121825 382 log.go:172] (0xc000138840) Data frame received for 5\nI0315 23:04:31.121833 382 log.go:172] (0xc00040c0a0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0315 23:04:31.122157 382 log.go:172] (0xc000138840) Data frame received for 3\nI0315 23:04:31.122187 382 log.go:172] (0xc00040c000) (3) Data frame handling\nI0315 23:04:31.122210 382 
log.go:172] (0xc00040c000) (3) Data frame sent\nI0315 23:04:31.122224 382 log.go:172] (0xc000138840) Data frame received for 3\nI0315 23:04:31.122237 382 log.go:172] (0xc00040c000) (3) Data frame handling\nI0315 23:04:31.123771 382 log.go:172] (0xc000138840) Data frame received for 1\nI0315 23:04:31.123788 382 log.go:172] (0xc000533360) (1) Data frame handling\nI0315 23:04:31.123801 382 log.go:172] (0xc000533360) (1) Data frame sent\nI0315 23:04:31.123822 382 log.go:172] (0xc000138840) (0xc000533360) Stream removed, broadcasting: 1\nI0315 23:04:31.124012 382 log.go:172] (0xc000138840) (0xc000533360) Stream removed, broadcasting: 1\nI0315 23:04:31.124035 382 log.go:172] (0xc000138840) (0xc00040c000) Stream removed, broadcasting: 3\nI0315 23:04:31.124123 382 log.go:172] (0xc000138840) Go away received\nI0315 23:04:31.124196 382 log.go:172] (0xc000138840) (0xc00040c0a0) Stream removed, broadcasting: 5\n" Mar 15 23:04:31.128: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 23:04:31.128: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 23:04:31.132: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 15 23:04:41.137: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 15 23:04:41.137: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 15 23:04:41.137: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 15 23:04:41.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-555m4 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 23:04:41.339: INFO: stderr: "I0315 23:04:41.261703 405 log.go:172] (0xc000138840) (0xc0006872c0) Create stream\nI0315 
23:04:41.261765 405 log.go:172] (0xc000138840) (0xc0006872c0) Stream added, broadcasting: 1\nI0315 23:04:41.263845 405 log.go:172] (0xc000138840) Reply frame received for 1\nI0315 23:04:41.263894 405 log.go:172] (0xc000138840) (0xc000742000) Create stream\nI0315 23:04:41.263906 405 log.go:172] (0xc000138840) (0xc000742000) Stream added, broadcasting: 3\nI0315 23:04:41.264710 405 log.go:172] (0xc000138840) Reply frame received for 3\nI0315 23:04:41.264756 405 log.go:172] (0xc000138840) (0xc000742140) Create stream\nI0315 23:04:41.264771 405 log.go:172] (0xc000138840) (0xc000742140) Stream added, broadcasting: 5\nI0315 23:04:41.265694 405 log.go:172] (0xc000138840) Reply frame received for 5\nI0315 23:04:41.334777 405 log.go:172] (0xc000138840) Data frame received for 5\nI0315 23:04:41.334800 405 log.go:172] (0xc000742140) (5) Data frame handling\nI0315 23:04:41.334827 405 log.go:172] (0xc000138840) Data frame received for 3\nI0315 23:04:41.334834 405 log.go:172] (0xc000742000) (3) Data frame handling\nI0315 23:04:41.334842 405 log.go:172] (0xc000742000) (3) Data frame sent\nI0315 23:04:41.334848 405 log.go:172] (0xc000138840) Data frame received for 3\nI0315 23:04:41.334853 405 log.go:172] (0xc000742000) (3) Data frame handling\nI0315 23:04:41.335928 405 log.go:172] (0xc000138840) Data frame received for 1\nI0315 23:04:41.335951 405 log.go:172] (0xc0006872c0) (1) Data frame handling\nI0315 23:04:41.335965 405 log.go:172] (0xc0006872c0) (1) Data frame sent\nI0315 23:04:41.335978 405 log.go:172] (0xc000138840) (0xc0006872c0) Stream removed, broadcasting: 1\nI0315 23:04:41.335992 405 log.go:172] (0xc000138840) Go away received\nI0315 23:04:41.336260 405 log.go:172] (0xc000138840) (0xc0006872c0) Stream removed, broadcasting: 1\nI0315 23:04:41.336291 405 log.go:172] (0xc000138840) (0xc000742000) Stream removed, broadcasting: 3\nI0315 23:04:41.336306 405 log.go:172] (0xc000138840) (0xc000742140) Stream removed, broadcasting: 5\n" Mar 15 23:04:41.339: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 23:04:41.339: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 23:04:41.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-555m4 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 23:04:41.692: INFO: stderr: "I0315 23:04:41.466773 427 log.go:172] (0xc0007e8420) (0xc0005ef360) Create stream\nI0315 23:04:41.466834 427 log.go:172] (0xc0007e8420) (0xc0005ef360) Stream added, broadcasting: 1\nI0315 23:04:41.469284 427 log.go:172] (0xc0007e8420) Reply frame received for 1\nI0315 23:04:41.469325 427 log.go:172] (0xc0007e8420) (0xc0002f6000) Create stream\nI0315 23:04:41.469336 427 log.go:172] (0xc0007e8420) (0xc0002f6000) Stream added, broadcasting: 3\nI0315 23:04:41.470373 427 log.go:172] (0xc0007e8420) Reply frame received for 3\nI0315 23:04:41.470415 427 log.go:172] (0xc0007e8420) (0xc0002f60a0) Create stream\nI0315 23:04:41.470425 427 log.go:172] (0xc0007e8420) (0xc0002f60a0) Stream added, broadcasting: 5\nI0315 23:04:41.471319 427 log.go:172] (0xc0007e8420) Reply frame received for 5\nI0315 23:04:41.685055 427 log.go:172] (0xc0007e8420) Data frame received for 3\nI0315 23:04:41.685244 427 log.go:172] (0xc0002f6000) (3) Data frame handling\nI0315 23:04:41.685279 427 log.go:172] (0xc0002f6000) (3) Data frame sent\nI0315 23:04:41.685299 427 log.go:172] (0xc0007e8420) Data frame received for 3\nI0315 23:04:41.685315 427 log.go:172] (0xc0002f6000) (3) Data frame handling\nI0315 23:04:41.685461 427 log.go:172] (0xc0007e8420) Data frame received for 5\nI0315 23:04:41.685506 427 log.go:172] (0xc0002f60a0) (5) Data frame handling\nI0315 23:04:41.687982 427 log.go:172] (0xc0007e8420) Data frame received for 1\nI0315 23:04:41.688009 427 log.go:172] (0xc0005ef360) (1) Data frame handling\nI0315 23:04:41.688094 427 log.go:172] 
(0xc0005ef360) (1) Data frame sent\nI0315 23:04:41.688116 427 log.go:172] (0xc0007e8420) (0xc0005ef360) Stream removed, broadcasting: 1\nI0315 23:04:41.688140 427 log.go:172] (0xc0007e8420) Go away received\nI0315 23:04:41.688372 427 log.go:172] (0xc0007e8420) (0xc0005ef360) Stream removed, broadcasting: 1\nI0315 23:04:41.688427 427 log.go:172] (0xc0007e8420) (0xc0002f6000) Stream removed, broadcasting: 3\nI0315 23:04:41.688445 427 log.go:172] (0xc0007e8420) (0xc0002f60a0) Stream removed, broadcasting: 5\n" Mar 15 23:04:41.692: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 23:04:41.692: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 23:04:41.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-555m4 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 23:04:41.967: INFO: stderr: "I0315 23:04:41.797047 449 log.go:172] (0xc000138630) (0xc00071c640) Create stream\nI0315 23:04:41.797236 449 log.go:172] (0xc000138630) (0xc00071c640) Stream added, broadcasting: 1\nI0315 23:04:41.799139 449 log.go:172] (0xc000138630) Reply frame received for 1\nI0315 23:04:41.799176 449 log.go:172] (0xc000138630) (0xc0006b8d20) Create stream\nI0315 23:04:41.799190 449 log.go:172] (0xc000138630) (0xc0006b8d20) Stream added, broadcasting: 3\nI0315 23:04:41.799943 449 log.go:172] (0xc000138630) Reply frame received for 3\nI0315 23:04:41.799980 449 log.go:172] (0xc000138630) (0xc000636000) Create stream\nI0315 23:04:41.799997 449 log.go:172] (0xc000138630) (0xc000636000) Stream added, broadcasting: 5\nI0315 23:04:41.800619 449 log.go:172] (0xc000138630) Reply frame received for 5\nI0315 23:04:41.961941 449 log.go:172] (0xc000138630) Data frame received for 3\nI0315 23:04:41.961974 449 log.go:172] (0xc0006b8d20) (3) Data frame handling\nI0315 23:04:41.961995 449 log.go:172] 
(0xc0006b8d20) (3) Data frame sent\nI0315 23:04:41.962009 449 log.go:172] (0xc000138630) Data frame received for 3\nI0315 23:04:41.962028 449 log.go:172] (0xc0006b8d20) (3) Data frame handling\nI0315 23:04:41.962103 449 log.go:172] (0xc000138630) Data frame received for 5\nI0315 23:04:41.962122 449 log.go:172] (0xc000636000) (5) Data frame handling\nI0315 23:04:41.963717 449 log.go:172] (0xc000138630) Data frame received for 1\nI0315 23:04:41.963753 449 log.go:172] (0xc00071c640) (1) Data frame handling\nI0315 23:04:41.963771 449 log.go:172] (0xc00071c640) (1) Data frame sent\nI0315 23:04:41.963799 449 log.go:172] (0xc000138630) (0xc00071c640) Stream removed, broadcasting: 1\nI0315 23:04:41.964099 449 log.go:172] (0xc000138630) Go away received\nI0315 23:04:41.964175 449 log.go:172] (0xc000138630) (0xc00071c640) Stream removed, broadcasting: 1\nI0315 23:04:41.964199 449 log.go:172] (0xc000138630) (0xc0006b8d20) Stream removed, broadcasting: 3\nI0315 23:04:41.964210 449 log.go:172] (0xc000138630) (0xc000636000) Stream removed, broadcasting: 5\n" Mar 15 23:04:41.967: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 23:04:41.967: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 23:04:41.967: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 23:04:41.971: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 15 23:04:51.977: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 15 23:04:51.977: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 15 23:04:51.977: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 15 23:04:52.198: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 23:04:52.198: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 
23:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:03:59 +0000 UTC }] Mar 15 23:04:52.198: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:20 +0000 UTC }] Mar 15 23:04:52.198: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC }] Mar 15 23:04:52.198: INFO: Mar 15 23:04:52.198: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 15 23:04:53.209: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 23:04:53.209: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 
23:03:59 +0000 UTC }] Mar 15 23:04:53.209: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:20 +0000 UTC }] Mar 15 23:04:53.209: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC }] Mar 15 23:04:53.209: INFO: Mar 15 23:04:53.209: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 15 23:04:54.456: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 23:04:54.456: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:03:59 +0000 UTC }] Mar 15 23:04:54.456: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:20 +0000 UTC }] Mar 15 23:04:54.457: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC }] Mar 15 23:04:54.457: INFO: Mar 15 23:04:54.457: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 15 23:04:55.462: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 23:04:55.462: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:03:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:03:59 +0000 UTC }] Mar 15 23:04:55.462: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:20 +0000 UTC }] Mar 15 23:04:55.462: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC }] Mar 15 23:04:55.462: INFO: Mar 15 23:04:55.462: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 15 23:04:56.466: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 23:04:56.466: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:20 +0000 UTC }] Mar 15 23:04:56.466: INFO: Mar 15 23:04:56.466: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 15 23:04:57.470: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 23:04:57.470: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:20 +0000 UTC }] Mar 15 23:04:57.470: INFO: Mar 15 23:04:57.470: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 15 23:04:58.551: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 23:04:58.551: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 
UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:20 +0000 UTC }] Mar 15 23:04:58.551: INFO: Mar 15 23:04:58.551: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 15 23:04:59.556: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 23:04:59.556: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:20 +0000 UTC }] Mar 15 23:04:59.557: INFO: Mar 15 23:04:59.557: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 15 23:05:00.611: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 23:05:00.611: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:20 +0000 UTC }] Mar 15 23:05:00.611: INFO: Mar 15 23:05:00.611: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 15 23:05:01.616: INFO: POD NODE PHASE GRACE CONDITIONS Mar 15 23:05:01.616: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:04:20 +0000 UTC }] Mar 15 23:05:01.616: INFO: Mar 15 23:05:01.616: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-555m4 Mar 15 23:05:02.701: INFO: Scaling statefulset ss to 0 Mar 15 23:05:02.710: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 15 23:05:02.712: INFO: Deleting all statefulset in ns e2e-tests-statefulset-555m4 Mar 15 23:05:02.714: INFO: Scaling statefulset ss to 0 Mar 15 23:05:02.721: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 23:05:02.723: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:05:02.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-555m4" for this suite. 
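[Editor's note] The scale-down logged above ("Scaling statefulset ss to 0") amounts to dropping the StatefulSet's spec.replicas to 0. A hypothetical minimal manifest of that shape — object and container names taken from the log, everything else (service name, labels, image) assumed for illustration:

```yaml
# Hypothetical sketch only; equivalent to:
#   kubectl scale statefulset ss --replicas=0 -n e2e-tests-statefulset-555m4
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: e2e-tests-statefulset-555m4
spec:
  replicas: 0              # scaled down from 3; burst scaling removes pods even while unready
  serviceName: test        # assumed headless service name
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx        # matches the [nginx] unready container in the status dumps above
        image: nginx
```

The "Burst scaling" in this test's title refers to podManagementPolicy: Parallel, which is why the scale-down proceeds without waiting for each unhealthy pod to terminate in ordinal order.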
Mar 15 23:05:17.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:05:17.588: INFO: namespace: e2e-tests-statefulset-555m4, resource: bindings, ignored listing per whitelist Mar 15 23:05:17.644: INFO: namespace e2e-tests-statefulset-555m4 deletion completed in 14.877450663s • [SLOW TEST:78.529 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:05:17.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 15 23:05:18.187: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 15 23:05:18.215: INFO: Waiting for terminating namespaces to be deleted... 
Mar 15 23:05:18.217: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 15 23:05:18.222: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Mar 15 23:05:18.222: INFO: Container kube-proxy ready: true, restart count 0 Mar 15 23:05:18.222: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 23:05:18.222: INFO: Container kindnet-cni ready: true, restart count 0 Mar 15 23:05:18.222: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 15 23:05:18.222: INFO: Container coredns ready: true, restart count 0 Mar 15 23:05:18.222: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 15 23:05:18.448: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 23:05:18.448: INFO: Container kindnet-cni ready: true, restart count 0 Mar 15 23:05:18.448: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 15 23:05:18.448: INFO: Container coredns ready: true, restart count 0 Mar 15 23:05:18.448: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 23:05:18.448: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Mar 15 23:05:18.980: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker Mar 15 23:05:18.980: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 Mar 15 23:05:18.980: INFO: 
Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker Mar 15 23:05:18.980: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Mar 15 23:05:18.980: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Mar 15 23:05:18.980: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-709f23fe-6711-11ea-811c-0242ac110013.15fc9cc060fe45af], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-q6xlc/filler-pod-709f23fe-6711-11ea-811c-0242ac110013 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-709f23fe-6711-11ea-811c-0242ac110013.15fc9cc0bfa94775], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-709f23fe-6711-11ea-811c-0242ac110013.15fc9cc11fbe05af], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-709f23fe-6711-11ea-811c-0242ac110013.15fc9cc13c409b4b], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-709fc341-6711-11ea-811c-0242ac110013.15fc9cc068c3e77d], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-q6xlc/filler-pod-709fc341-6711-11ea-811c-0242ac110013 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-709fc341-6711-11ea-811c-0242ac110013.15fc9cc0c9c40a48], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-709fc341-6711-11ea-811c-0242ac110013.15fc9cc121e78568], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-709fc341-6711-11ea-811c-0242ac110013.15fc9cc13c3f9245], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fc9cc1581ea1db], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:05:25.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-q6xlc" for this suite. Mar 15 23:05:31.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:05:31.399: INFO: namespace: e2e-tests-sched-pred-q6xlc, resource: bindings, ignored listing per whitelist Mar 15 23:05:31.462: INFO: namespace e2e-tests-sched-pred-q6xlc deletion completed in 6.29173008s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.818 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:05:31.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:05:39.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-j99cg" for this suite. 
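[Editor's note] The pod this Kubelet test runs is a busybox command that always exits non-zero, so the container finishes in a terminated state carrying a reason. A hypothetical pod of that shape — image, command, and restart policy assumed for illustration:

```yaml
# Hypothetical sketch of the "busybox command that always fails" pod.
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  restartPolicy: Never     # keep the container terminated instead of restarting it
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # always exits with a non-zero status
```

The test then inspects the container status for a populated state.terminated block — e.g. a reason such as "Error" and a non-zero exitCode.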
Mar 15 23:05:49.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:05:49.723: INFO: namespace: e2e-tests-kubelet-test-j99cg, resource: bindings, ignored listing per whitelist Mar 15 23:05:49.879: INFO: namespace e2e-tests-kubelet-test-j99cg deletion completed in 10.217666147s • [SLOW TEST:18.417 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:05:49.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 23:05:50.350: INFO: Waiting up to 5m0s for pod "downwardapi-volume-834dd625-6711-11ea-811c-0242ac110013" in 
namespace "e2e-tests-projected-7l7jv" to be "success or failure" Mar 15 23:05:50.511: INFO: Pod "downwardapi-volume-834dd625-6711-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 160.863508ms Mar 15 23:05:52.514: INFO: Pod "downwardapi-volume-834dd625-6711-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164177096s Mar 15 23:05:54.518: INFO: Pod "downwardapi-volume-834dd625-6711-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167865554s Mar 15 23:05:56.521: INFO: Pod "downwardapi-volume-834dd625-6711-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.171624904s STEP: Saw pod success Mar 15 23:05:56.522: INFO: Pod "downwardapi-volume-834dd625-6711-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:05:56.524: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-834dd625-6711-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 15 23:05:56.560: INFO: Waiting for pod downwardapi-volume-834dd625-6711-11ea-811c-0242ac110013 to disappear Mar 15 23:05:56.586: INFO: Pod downwardapi-volume-834dd625-6711-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:05:56.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7l7jv" for this suite. 
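[Editor's note] The downward API volume mounted by this test requests the container's CPU limit via resourceFieldRef; because the container sets no CPU limit, the file falls back to the node's allocatable CPU, which is what the test asserts. A hypothetical manifest of that shape — file path, names, and command assumed for illustration:

```yaml
# Hypothetical sketch: projected downwardAPI volume exposing limits.cpu.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set, so the file reports node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```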
Mar 15 23:06:02.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:06:02.682: INFO: namespace: e2e-tests-projected-7l7jv, resource: bindings, ignored listing per whitelist Mar 15 23:06:02.737: INFO: namespace e2e-tests-projected-7l7jv deletion completed in 6.148097439s • [SLOW TEST:12.858 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:06:02.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 15 23:06:10.072: INFO: Successfully updated pod "labelsupdate8b294004-6711-11ea-811c-0242ac110013" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:06:12.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-projected-pqzkj" for this suite. Mar 15 23:06:34.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:06:34.365: INFO: namespace: e2e-tests-projected-pqzkj, resource: bindings, ignored listing per whitelist Mar 15 23:06:34.412: INFO: namespace e2e-tests-projected-pqzkj deletion completed in 22.234012342s • [SLOW TEST:31.675 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:06:34.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
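[Editor's note] The pod created in the next step carries a preStop exec hook, which the kubelet runs before stopping the container when the pod is deleted — hence the long "Waiting for pod ... to disappear" polling loop below while the hook and termination grace period play out. A hypothetical pod of that shape; the hook command here is assumed (the real test's hook calls back to the HTTPGet handler pod created above so its execution can be verified):

```yaml
# Hypothetical sketch of a pod with a preStop exec lifecycle hook.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # Assumed for illustration only.
          command: ["sh", "-c", "echo prestop > /tmp/prestop"]
```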
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 15 23:06:46.798: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:06:46.900: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:06:48.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:06:48.904: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:06:50.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:06:50.905: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:06:52.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:06:52.910: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:06:54.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:06:54.905: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:06:56.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:06:56.904: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:06:58.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:06:58.904: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:07:00.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:07:00.905: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:07:02.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:07:02.905: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:07:04.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:07:04.905: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:07:06.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:07:06.905: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:07:08.900: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Mar 15 23:07:08.904: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:07:10.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:07:10.905: INFO: Pod pod-with-prestop-exec-hook still exists Mar 15 23:07:12.900: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 15 23:07:12.904: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:07:12.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mnxrj" for this suite. Mar 15 23:07:36.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:07:36.974: INFO: namespace: e2e-tests-container-lifecycle-hook-mnxrj, resource: bindings, ignored listing per whitelist Mar 15 23:07:37.005: INFO: namespace e2e-tests-container-lifecycle-hook-mnxrj deletion completed in 24.092064123s • [SLOW TEST:62.592 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:07:37.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-c2f4b686-6711-11ea-811c-0242ac110013 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-c2f4b686-6711-11ea-811c-0242ac110013 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:08:52.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ltfh6" for this suite. 
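[Editor's note] The long "waiting to observe update in volume" phase above reflects how projected ConfigMap volumes propagate: after the ConfigMap object is updated through the API, the kubelet refreshes the mounted files on a subsequent periodic sync rather than instantly. A hypothetical pod of the shape this test creates — ConfigMap name, paths, and command assumed for illustration:

```yaml
# Hypothetical sketch: projected configMap volume whose contents the test
# re-reads until the updated value appears.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd   # assumed name for illustration
```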
Mar 15 23:09:16.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:09:16.453: INFO: namespace: e2e-tests-projected-ltfh6, resource: bindings, ignored listing per whitelist Mar 15 23:09:16.461: INFO: namespace e2e-tests-projected-ltfh6 deletion completed in 24.115816667s • [SLOW TEST:99.456 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:09:16.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 15 23:09:16.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-786rz' Mar 15 
23:09:19.548: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 15 23:09:19.548: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 15 23:09:19.633: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-j6qzr] Mar 15 23:09:19.633: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-j6qzr" in namespace "e2e-tests-kubectl-786rz" to be "running and ready" Mar 15 23:09:19.635: INFO: Pod "e2e-test-nginx-rc-j6qzr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.483062ms Mar 15 23:09:21.708: INFO: Pod "e2e-test-nginx-rc-j6qzr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074583616s Mar 15 23:09:23.711: INFO: Pod "e2e-test-nginx-rc-j6qzr": Phase="Running", Reason="", readiness=true. Elapsed: 4.078217346s Mar 15 23:09:23.711: INFO: Pod "e2e-test-nginx-rc-j6qzr" satisfied condition "running and ready" Mar 15 23:09:23.711: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-j6qzr] Mar 15 23:09:23.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-786rz' Mar 15 23:09:23.852: INFO: stderr: "" Mar 15 23:09:23.852: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Mar 15 23:09:23.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-786rz' Mar 15 23:09:23.969: INFO: stderr: "" Mar 15 23:09:23.969: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:09:23.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-786rz" for this suite. Mar 15 23:09:46.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:09:46.057: INFO: namespace: e2e-tests-kubectl-786rz, resource: bindings, ignored listing per whitelist Mar 15 23:09:46.089: INFO: namespace e2e-tests-kubectl-786rz deletion completed in 22.117991019s • [SLOW TEST:29.628 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
[BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:09:46.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Mar 15 23:09:46.396: INFO: Waiting up to 5m0s for pod "pod-1002a367-6712-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-9bxwf" to be "success or failure" Mar 15 23:09:46.454: INFO: Pod "pod-1002a367-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 58.336772ms Mar 15 23:09:48.458: INFO: Pod "pod-1002a367-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061786993s Mar 15 23:09:50.462: INFO: Pod "pod-1002a367-6712-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065590948s STEP: Saw pod success Mar 15 23:09:50.462: INFO: Pod "pod-1002a367-6712-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:09:50.464: INFO: Trying to get logs from node hunter-worker pod pod-1002a367-6712-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 15 23:09:50.526: INFO: Waiting for pod pod-1002a367-6712-11ea-811c-0242ac110013 to disappear Mar 15 23:09:50.531: INFO: Pod pod-1002a367-6712-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:09:50.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9bxwf" for this suite. 
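[Editor's note] The emptyDir pod above checks the volume's mode on the default medium — i.e. node-disk backing, since no medium field is set. A hypothetical pod of that shape (names and command assumed; the "correct mode" the upstream test asserts for the mount point is 0777):

```yaml
# Hypothetical sketch: emptyDir on the default medium; the container prints
# the mount point's permissions for the test to verify.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-default
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # no medium set, so the node's disk backs the volume
```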
Mar 15 23:09:56.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:09:56.603: INFO: namespace: e2e-tests-emptydir-9bxwf, resource: bindings, ignored listing per whitelist Mar 15 23:09:56.644: INFO: namespace e2e-tests-emptydir-9bxwf deletion completed in 6.110098349s • [SLOW TEST:10.554 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:09:56.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:09:56.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 15 23:09:57.066: INFO: stderr: "" Mar 15 23:09:57.066: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T22:39:48Z\", GoVersion:\"go1.11.12\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:09:57.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hpg2g" for this suite. Mar 15 23:10:03.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:10:03.245: INFO: namespace: e2e-tests-kubectl-hpg2g, resource: bindings, ignored listing per whitelist Mar 15 23:10:03.252: INFO: namespace e2e-tests-kubectl-hpg2g deletion completed in 6.181857227s • [SLOW TEST:6.608 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:10:03.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward 
API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 23:10:03.361: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a1b12aa-6712-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-twd7s" to be "success or failure" Mar 15 23:10:03.418: INFO: Pod "downwardapi-volume-1a1b12aa-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 56.795814ms Mar 15 23:10:05.422: INFO: Pod "downwardapi-volume-1a1b12aa-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060291929s Mar 15 23:10:07.562: INFO: Pod "downwardapi-volume-1a1b12aa-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200396552s Mar 15 23:10:09.574: INFO: Pod "downwardapi-volume-1a1b12aa-6712-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.212702322s STEP: Saw pod success Mar 15 23:10:09.574: INFO: Pod "downwardapi-volume-1a1b12aa-6712-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:10:09.578: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1a1b12aa-6712-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 15 23:10:09.604: INFO: Waiting for pod downwardapi-volume-1a1b12aa-6712-11ea-811c-0242ac110013 to disappear Mar 15 23:10:09.636: INFO: Pod downwardapi-volume-1a1b12aa-6712-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:10:09.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-twd7s" for this suite. 
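The downward API volume test above ("should provide podname only") mounts the pod's own name into a file via a fieldRef. A sketch of that volume shape, with hypothetical names and image (the real test generates unique downwardapi-volume-… names):

```yaml
# Illustrative downward API volume pod; names and image are
# hypothetical, the fieldRef projection is the mechanism under test.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-test  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: mounttest              # stand-in for the e2e test image
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the pod's name, projected into a file
```

The container reads /etc/podinfo/podname and the test compares it against the generated pod name.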
Mar 15 23:10:15.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:10:15.830: INFO: namespace: e2e-tests-downward-api-twd7s, resource: bindings, ignored listing per whitelist Mar 15 23:10:15.877: INFO: namespace e2e-tests-downward-api-twd7s deletion completed in 6.238041387s • [SLOW TEST:12.625 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:10:15.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-2rb8 STEP: Creating a pod to test atomic-volume-subpath Mar 15 23:10:16.156: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2rb8" in namespace "e2e-tests-subpath-6ccwc" to be "success or failure" Mar 15 23:10:16.262: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 106.023017ms Mar 15 23:10:18.266: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10968564s Mar 15 23:10:20.270: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113807309s Mar 15 23:10:22.274: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Running", Reason="", readiness=true. Elapsed: 6.117624969s Mar 15 23:10:24.279: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Running", Reason="", readiness=false. Elapsed: 8.122249086s Mar 15 23:10:26.283: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Running", Reason="", readiness=false. Elapsed: 10.126808779s Mar 15 23:10:28.287: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Running", Reason="", readiness=false. Elapsed: 12.130660339s Mar 15 23:10:30.291: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Running", Reason="", readiness=false. Elapsed: 14.134572006s Mar 15 23:10:32.295: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Running", Reason="", readiness=false. Elapsed: 16.139010745s Mar 15 23:10:34.299: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Running", Reason="", readiness=false. Elapsed: 18.142964048s Mar 15 23:10:36.303: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Running", Reason="", readiness=false. Elapsed: 20.146314694s Mar 15 23:10:38.306: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Running", Reason="", readiness=false. Elapsed: 22.149944055s Mar 15 23:10:40.310: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Running", Reason="", readiness=false. Elapsed: 24.153608393s Mar 15 23:10:42.314: INFO: Pod "pod-subpath-test-projected-2rb8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.157879333s STEP: Saw pod success Mar 15 23:10:42.314: INFO: Pod "pod-subpath-test-projected-2rb8" satisfied condition "success or failure" Mar 15 23:10:42.317: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-2rb8 container test-container-subpath-projected-2rb8: STEP: delete the pod Mar 15 23:10:42.811: INFO: Waiting for pod pod-subpath-test-projected-2rb8 to disappear Mar 15 23:10:43.126: INFO: Pod pod-subpath-test-projected-2rb8 no longer exists STEP: Deleting pod pod-subpath-test-projected-2rb8 Mar 15 23:10:43.126: INFO: Deleting pod "pod-subpath-test-projected-2rb8" in namespace "e2e-tests-subpath-6ccwc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:10:43.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-6ccwc" for this suite. Mar 15 23:10:49.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:10:50.032: INFO: namespace: e2e-tests-subpath-6ccwc, resource: bindings, ignored listing per whitelist Mar 15 23:10:50.176: INFO: namespace e2e-tests-subpath-6ccwc deletion completed in 6.746938942s • [SLOW TEST:34.299 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:10:50.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Mar 15 23:10:50.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wsj5p' Mar 15 23:10:50.742: INFO: stderr: "" Mar 15 23:10:50.742: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Mar 15 23:10:51.745: INFO: Selector matched 1 pods for map[app:redis] Mar 15 23:10:51.745: INFO: Found 0 / 1 Mar 15 23:10:52.746: INFO: Selector matched 1 pods for map[app:redis] Mar 15 23:10:52.747: INFO: Found 0 / 1 Mar 15 23:10:54.089: INFO: Selector matched 1 pods for map[app:redis] Mar 15 23:10:54.089: INFO: Found 0 / 1 Mar 15 23:10:54.747: INFO: Selector matched 1 pods for map[app:redis] Mar 15 23:10:54.747: INFO: Found 0 / 1 Mar 15 23:10:55.745: INFO: Selector matched 1 pods for map[app:redis] Mar 15 23:10:55.745: INFO: Found 0 / 1 Mar 15 23:10:56.745: INFO: Selector matched 1 pods for map[app:redis] Mar 15 23:10:56.745: INFO: Found 1 / 1 Mar 15 23:10:56.745: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 15 23:10:56.747: INFO: Selector matched 1 pods for map[app:redis] Mar 15 23:10:56.747: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching strings Mar 15 23:10:56.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6srlh redis-master --namespace=e2e-tests-kubectl-wsj5p' Mar 15 23:10:56.854: INFO: stderr: "" Mar 15 23:10:56.854: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Mar 23:10:55.810 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Mar 23:10:55.810 # Server started, Redis version 3.2.12\n1:M 15 Mar 23:10:55.810 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 15 Mar 23:10:55.810 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 15 23:10:56.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6srlh redis-master --namespace=e2e-tests-kubectl-wsj5p --tail=1' Mar 15 23:10:56.943: INFO: stderr: "" Mar 15 23:10:56.943: INFO: stdout: "1:M 15 Mar 23:10:55.810 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 15 23:10:56.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6srlh redis-master --namespace=e2e-tests-kubectl-wsj5p --limit-bytes=1' Mar 15 23:10:57.041: INFO: stderr: "" Mar 15 23:10:57.041: INFO: stdout: " " STEP: exposing timestamps Mar 15 23:10:57.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6srlh redis-master --namespace=e2e-tests-kubectl-wsj5p --tail=1 --timestamps' Mar 15 23:10:57.128: INFO: stderr: "" Mar 15 23:10:57.128: INFO: stdout: "2020-03-15T23:10:55.917726239Z 1:M 15 Mar 23:10:55.810 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 15 23:10:59.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6srlh redis-master --namespace=e2e-tests-kubectl-wsj5p --since=1s' Mar 15 23:10:59.737: INFO: stderr: "" Mar 15 23:10:59.737: INFO: stdout: "" Mar 15 23:10:59.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6srlh redis-master --namespace=e2e-tests-kubectl-wsj5p --since=24h' Mar 15 23:10:59.855: INFO: stderr: "" Mar 15 23:10:59.855: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Mar 23:10:55.810 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Mar 23:10:55.810 # Server started, Redis version 3.2.12\n1:M 15 Mar 23:10:55.810 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Mar 23:10:55.810 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Mar 15 23:10:59.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wsj5p' Mar 15 23:11:00.003: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 15 23:11:00.003: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 15 23:11:00.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-wsj5p' Mar 15 23:11:00.120: INFO: stderr: "No resources found.\n" Mar 15 23:11:00.120: INFO: stdout: "" Mar 15 23:11:00.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-wsj5p -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 15 23:11:00.217: INFO: stderr: "" Mar 15 23:11:00.217: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:11:00.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wsj5p" for this suite. 
Mar 15 23:11:10.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:11:10.325: INFO: namespace: e2e-tests-kubectl-wsj5p, resource: bindings, ignored listing per whitelist Mar 15 23:11:10.338: INFO: namespace e2e-tests-kubectl-wsj5p deletion completed in 10.118200123s • [SLOW TEST:20.162 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:11:10.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 15 23:11:11.056: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 
Mar 15 23:11:21.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-hnhvt" for this suite. Mar 15 23:11:27.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:11:28.042: INFO: namespace: e2e-tests-init-container-hnhvt, resource: bindings, ignored listing per whitelist Mar 15 23:11:28.093: INFO: namespace e2e-tests-init-container-hnhvt deletion completed in 6.536069594s • [SLOW TEST:17.754 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:11:28.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Mar 15 23:11:28.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 15 23:11:28.273: INFO: stderr: "" Mar 15 
23:11:28.273: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:11:28.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-n2qjl" for this suite. Mar 15 23:11:34.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:11:34.386: INFO: namespace: e2e-tests-kubectl-n2qjl, resource: bindings, ignored listing per whitelist Mar 15 23:11:34.397: INFO: namespace e2e-tests-kubectl-n2qjl deletion completed in 6.120050764s • [SLOW TEST:6.303 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:11:34.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-6dkbr [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-6dkbr STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-6dkbr STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-6dkbr STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-6dkbr Mar 15 23:11:40.897: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-6dkbr, name: ss-0, uid: 511cd1b2-6712-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. Mar 15 23:11:41.244: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-6dkbr, name: ss-0, uid: 511cd1b2-6712-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Mar 15 23:11:41.266: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-6dkbr, name: ss-0, uid: 511cd1b2-6712-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 15 23:11:41.353: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-6dkbr STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-6dkbr STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-6dkbr and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 15 23:11:45.896: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6dkbr Mar 15 23:11:45.902: INFO: Scaling statefulset ss to 0 Mar 15 23:12:06.623: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 23:12:06.625: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:12:06.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-6dkbr" for this suite. 
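The StatefulSet eviction test above works by creating a plain pod that claims a hostPort, then a single-replica StatefulSet whose pod requests the same hostPort on the same node, so ss-0 fails until the conflicting pod is removed. A minimal sketch of that StatefulSet shape; the port number and image are illustrative assumptions, not values from this run:

```yaml
# Illustrative single-replica StatefulSet with a hostPort, the shape
# used to force the scheduling conflict seen in the log above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test               # matches the headless service the test creates
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: nginx              # hypothetical image
        ports:
        - containerPort: 80
          hostPort: 21017         # hypothetical port; conflicts with the pre-created pod
```

Once the conflicting pod is deleted, the StatefulSet controller recreates ss-0 and it reaches Running, which is what the log's final wait confirms.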
Mar 15 23:12:12.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:12:12.726: INFO: namespace: e2e-tests-statefulset-6dkbr, resource: bindings, ignored listing per whitelist Mar 15 23:12:12.742: INFO: namespace e2e-tests-statefulset-6dkbr deletion completed in 6.098105194s • [SLOW TEST:38.345 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:12:12.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 23:12:12.834: INFO: Waiting up to 5m0s for pod "downwardapi-volume-674aceaf-6712-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-j688x" to be "success or failure" Mar 15 23:12:12.838: 
INFO: Pod "downwardapi-volume-674aceaf-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.535151ms Mar 15 23:12:14.841: INFO: Pod "downwardapi-volume-674aceaf-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00741901s Mar 15 23:12:16.846: INFO: Pod "downwardapi-volume-674aceaf-6712-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011909652s STEP: Saw pod success Mar 15 23:12:16.846: INFO: Pod "downwardapi-volume-674aceaf-6712-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:12:16.848: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-674aceaf-6712-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 15 23:12:16.917: INFO: Waiting for pod downwardapi-volume-674aceaf-6712-11ea-811c-0242ac110013 to disappear Mar 15 23:12:16.919: INFO: Pod downwardapi-volume-674aceaf-6712-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:12:16.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j688x" for this suite. 
Mar 15 23:12:22.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:12:22.983: INFO: namespace: e2e-tests-projected-j688x, resource: bindings, ignored listing per whitelist Mar 15 23:12:23.042: INFO: namespace e2e-tests-projected-j688x deletion completed in 6.119048227s • [SLOW TEST:10.300 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:12:23.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 15 23:12:23.182: INFO: Waiting up to 5m0s for pod "pod-6d7106ed-6712-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-86dfx" to be "success or failure" Mar 15 23:12:23.188: INFO: Pod "pod-6d7106ed-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113176ms Mar 15 23:12:25.192: INFO: Pod "pod-6d7106ed-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010036848s Mar 15 23:12:27.195: INFO: Pod "pod-6d7106ed-6712-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013603381s STEP: Saw pod success Mar 15 23:12:27.195: INFO: Pod "pod-6d7106ed-6712-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:12:27.199: INFO: Trying to get logs from node hunter-worker pod pod-6d7106ed-6712-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 15 23:12:27.284: INFO: Waiting for pod pod-6d7106ed-6712-11ea-811c-0242ac110013 to disappear Mar 15 23:12:27.295: INFO: Pod pod-6d7106ed-6712-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:12:27.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-86dfx" for this suite. Mar 15 23:12:33.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:12:33.344: INFO: namespace: e2e-tests-emptydir-86dfx, resource: bindings, ignored listing per whitelist Mar 15 23:12:33.385: INFO: namespace e2e-tests-emptydir-86dfx deletion completed in 6.087146827s • [SLOW TEST:10.343 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:12:33.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-73a5d665-6712-11ea-811c-0242ac110013 STEP: Creating secret with name secret-projected-all-test-volume-73a5d63c-6712-11ea-811c-0242ac110013 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 15 23:12:33.645: INFO: Waiting up to 5m0s for pod "projected-volume-73a5d5cf-6712-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-df8wl" to be "success or failure" Mar 15 23:12:33.649: INFO: Pod "projected-volume-73a5d5cf-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 3.275374ms Mar 15 23:12:35.653: INFO: Pod "projected-volume-73a5d5cf-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007349289s Mar 15 23:12:37.657: INFO: Pod "projected-volume-73a5d5cf-6712-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011509634s STEP: Saw pod success Mar 15 23:12:37.657: INFO: Pod "projected-volume-73a5d5cf-6712-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:12:37.660: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-73a5d5cf-6712-11ea-811c-0242ac110013 container projected-all-volume-test: STEP: delete the pod Mar 15 23:12:37.711: INFO: Waiting for pod projected-volume-73a5d5cf-6712-11ea-811c-0242ac110013 to disappear Mar 15 23:12:37.726: INFO: Pod projected-volume-73a5d5cf-6712-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:12:37.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-df8wl" for this suite. Mar 15 23:12:43.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:12:43.812: INFO: namespace: e2e-tests-projected-df8wl, resource: bindings, ignored listing per whitelist Mar 15 23:12:43.824: INFO: namespace e2e-tests-projected-df8wl deletion completed in 6.094381396s • [SLOW TEST:10.438 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client Mar 15 23:12:43.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-79d2c6ca-6712-11ea-811c-0242ac110013 STEP: Creating a pod to test consume secrets Mar 15 23:12:43.959: INFO: Waiting up to 5m0s for pod "pod-secrets-79d8aa8f-6712-11ea-811c-0242ac110013" in namespace "e2e-tests-secrets-vwm5s" to be "success or failure" Mar 15 23:12:43.979: INFO: Pod "pod-secrets-79d8aa8f-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 20.564023ms Mar 15 23:12:46.259: INFO: Pod "pod-secrets-79d8aa8f-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300265016s Mar 15 23:12:48.263: INFO: Pod "pod-secrets-79d8aa8f-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30456982s Mar 15 23:12:50.267: INFO: Pod "pod-secrets-79d8aa8f-6712-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.308656609s STEP: Saw pod success Mar 15 23:12:50.267: INFO: Pod "pod-secrets-79d8aa8f-6712-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:12:50.270: INFO: Trying to get logs from node hunter-worker pod pod-secrets-79d8aa8f-6712-11ea-811c-0242ac110013 container secret-volume-test: STEP: delete the pod Mar 15 23:12:50.399: INFO: Waiting for pod pod-secrets-79d8aa8f-6712-11ea-811c-0242ac110013 to disappear Mar 15 23:12:50.427: INFO: Pod pod-secrets-79d8aa8f-6712-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:12:50.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vwm5s" for this suite. Mar 15 23:12:56.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:12:56.660: INFO: namespace: e2e-tests-secrets-vwm5s, resource: bindings, ignored listing per whitelist Mar 15 23:12:56.698: INFO: namespace e2e-tests-secrets-vwm5s deletion completed in 6.268296774s • [SLOW TEST:12.874 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:12:56.698: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 15 23:12:56.804: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:13:05.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-jh82c" for this suite. Mar 15 23:13:29.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:13:29.842: INFO: namespace: e2e-tests-init-container-jh82c, resource: bindings, ignored listing per whitelist Mar 15 23:13:29.888: INFO: namespace e2e-tests-init-container-jh82c deletion completed in 24.240715404s • [SLOW TEST:33.190 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:13:29.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 23:13:30.771: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95a57838-6712-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-87blz" to be "success or failure" Mar 15 23:13:31.385: INFO: Pod "downwardapi-volume-95a57838-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 614.388863ms Mar 15 23:13:33.389: INFO: Pod "downwardapi-volume-95a57838-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.618513102s Mar 15 23:13:35.394: INFO: Pod "downwardapi-volume-95a57838-6712-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.623343042s STEP: Saw pod success Mar 15 23:13:35.394: INFO: Pod "downwardapi-volume-95a57838-6712-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:13:35.397: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-95a57838-6712-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 15 23:13:35.426: INFO: Waiting for pod downwardapi-volume-95a57838-6712-11ea-811c-0242ac110013 to disappear Mar 15 23:13:35.440: INFO: Pod downwardapi-volume-95a57838-6712-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:13:35.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-87blz" for this suite. Mar 15 23:13:45.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:13:45.537: INFO: namespace: e2e-tests-downward-api-87blz, resource: bindings, ignored listing per whitelist Mar 15 23:13:45.546: INFO: namespace e2e-tests-downward-api-87blz deletion completed in 10.101845432s • [SLOW TEST:15.657 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client Mar 15 23:13:45.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-9ea82120-6712-11ea-811c-0242ac110013 STEP: Creating a pod to test consume secrets Mar 15 23:13:45.873: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9eb45dc5-6712-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-bws56" to be "success or failure" Mar 15 23:13:45.895: INFO: Pod "pod-projected-secrets-9eb45dc5-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 21.728499ms Mar 15 23:13:47.898: INFO: Pod "pod-projected-secrets-9eb45dc5-6712-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02536929s Mar 15 23:13:49.903: INFO: Pod "pod-projected-secrets-9eb45dc5-6712-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029608355s STEP: Saw pod success Mar 15 23:13:49.903: INFO: Pod "pod-projected-secrets-9eb45dc5-6712-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:13:49.906: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-9eb45dc5-6712-11ea-811c-0242ac110013 container secret-volume-test: STEP: delete the pod Mar 15 23:13:50.123: INFO: Waiting for pod pod-projected-secrets-9eb45dc5-6712-11ea-811c-0242ac110013 to disappear Mar 15 23:13:50.158: INFO: Pod pod-projected-secrets-9eb45dc5-6712-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:13:50.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bws56" for this suite. Mar 15 23:13:56.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:13:56.408: INFO: namespace: e2e-tests-projected-bws56, resource: bindings, ignored listing per whitelist Mar 15 23:13:56.440: INFO: namespace e2e-tests-projected-bws56 deletion completed in 6.278935548s • [SLOW TEST:10.894 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 
23:13:56.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-a52421d7-6712-11ea-811c-0242ac110013 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-a52421d7-6712-11ea-811c-0242ac110013 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:14:02.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7nt9l" for this suite. Mar 15 23:14:24.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:14:25.654: INFO: namespace: e2e-tests-configmap-7nt9l, resource: bindings, ignored listing per whitelist Mar 15 23:14:25.657: INFO: namespace e2e-tests-configmap-7nt9l deletion completed in 22.882149854s • [SLOW TEST:29.216 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:14:25.657: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-b6e0366c-6712-11ea-811c-0242ac110013 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:14:36.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-th4j6" for this suite. Mar 15 23:15:00.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:15:00.677: INFO: namespace: e2e-tests-configmap-th4j6, resource: bindings, ignored listing per whitelist Mar 15 23:15:00.712: INFO: namespace e2e-tests-configmap-th4j6 deletion completed in 24.126631705s • [SLOW TEST:35.055 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:15:00.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 15 23:15:01.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-txr28' Mar 15 23:15:01.213: INFO: stderr: "" Mar 15 23:15:01.213: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Mar 15 23:15:01.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-txr28' Mar 15 23:15:05.909: INFO: stderr: "" Mar 15 23:15:05.909: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:15:05.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-txr28" for this suite. 
Mar 15 23:15:12.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:15:12.237: INFO: namespace: e2e-tests-kubectl-txr28, resource: bindings, ignored listing per whitelist Mar 15 23:15:12.252: INFO: namespace e2e-tests-kubectl-txr28 deletion completed in 6.161533864s • [SLOW TEST:11.539 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:15:12.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:15:13.315: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 40.857447ms)
Mar 15 23:15:13.320: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.407035ms)
Mar 15 23:15:13.323: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.804331ms)
Mar 15 23:15:13.326: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.945964ms)
Mar 15 23:15:13.328: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.313526ms)
Mar 15 23:15:13.331: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.728134ms)
Mar 15 23:15:13.333: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.297476ms)
Mar 15 23:15:13.335: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.239139ms)
Mar 15 23:15:13.337: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.260762ms)
Mar 15 23:15:13.340: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.587801ms)
Mar 15 23:15:13.417: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 77.206952ms)
Mar 15 23:15:13.421: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.697008ms)
Mar 15 23:15:13.425: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.909284ms)
Mar 15 23:15:13.428: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.19813ms)
Mar 15 23:15:13.432: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.263803ms)
Mar 15 23:15:13.434: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.62313ms)
Mar 15 23:15:13.437: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.957059ms)
Mar 15 23:15:13.440: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.922213ms)
Mar 15 23:15:13.444: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.709883ms)
Mar 15 23:15:13.447: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.81291ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:15:13.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-lgxnn" for this suite. Mar 15 23:15:19.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:15:19.619: INFO: namespace: e2e-tests-proxy-lgxnn, resource: bindings, ignored listing per whitelist Mar 15 23:15:19.650: INFO: namespace e2e-tests-proxy-lgxnn deletion completed in 6.200092365s • [SLOW TEST:7.398 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:15:19.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 15 23:15:19.716: INFO: PodSpec: initContainers in spec.initContainers Mar 15 23:16:08.171: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d6b02363-6712-11ea-811c-0242ac110013", GenerateName:"", Namespace:"e2e-tests-init-container-chkct", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-chkct/pods/pod-init-d6b02363-6712-11ea-811c-0242ac110013", UID:"d6b755fb-6712-11ea-99e8-0242ac110002", ResourceVersion:"46991", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719910919, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"716139587"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6pp6l", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001a82bc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6pp6l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6pp6l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6pp6l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0022d9d88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001dbfe60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022d9e10)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022d9e30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022d9e38), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022d9e3c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719910919, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719910919, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719910919, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719910919, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.207", StartTime:(*v1.Time)(0xc001efab00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001efab40), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001893ce0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://c03f11fa98e84ee97e6d36fe2a6d936c03db4bd0423f3fd6271f2ef8e544fd38"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001efab60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001efab20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:16:08.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-chkct" for this suite. Mar 15 23:16:32.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:16:32.406: INFO: namespace: e2e-tests-init-container-chkct, resource: bindings, ignored listing per whitelist Mar 15 23:16:32.472: INFO: namespace e2e-tests-init-container-chkct deletion completed in 24.126786085s • [SLOW TEST:72.822 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:16:32.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-021ffa05-6713-11ea-811c-0242ac110013 STEP: Creating a pod to test consume configMaps Mar 15 23:16:32.632: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-0225709e-6713-11ea-811c-0242ac110013" in namespace "e2e-tests-configmap-mr88k" to be "success or failure" Mar 15 23:16:32.635: INFO: Pod "pod-configmaps-0225709e-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 3.1207ms Mar 15 23:16:34.639: INFO: Pod "pod-configmaps-0225709e-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00699655s Mar 15 23:16:36.642: INFO: Pod "pod-configmaps-0225709e-6713-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010607424s STEP: Saw pod success Mar 15 23:16:36.642: INFO: Pod "pod-configmaps-0225709e-6713-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:16:36.645: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-0225709e-6713-11ea-811c-0242ac110013 container configmap-volume-test: STEP: delete the pod Mar 15 23:16:36.698: INFO: Waiting for pod pod-configmaps-0225709e-6713-11ea-811c-0242ac110013 to disappear Mar 15 23:16:36.719: INFO: Pod pod-configmaps-0225709e-6713-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:16:36.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-mr88k" for this suite. 
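The ConfigMap test above mounts a single ConfigMap into two volumes of the same pod and reads it back from both mount paths. As a rough illustration (the manifest below is a sketch; the names, keys, and command are invented, not taken from the log), the shape of what the test creates is:

```yaml
# Illustrative sketch only: one ConfigMap consumed as two volumes in one pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume   # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    # Reads the same key through both mount paths, then exits;
    # the test treats pod Succeeded as "success or failure" met.
    command: ["cat", "/etc/configmap-volume-1/data-1", "/etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
```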
Mar 15 23:16:42.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:16:43.007: INFO: namespace: e2e-tests-configmap-mr88k, resource: bindings, ignored listing per whitelist Mar 15 23:16:43.355: INFO: namespace e2e-tests-configmap-mr88k deletion completed in 6.632915179s • [SLOW TEST:10.882 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:16:43.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:17:43.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jpffn" for 
this suite. Mar 15 23:18:05.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:18:05.867: INFO: namespace: e2e-tests-container-probe-jpffn, resource: bindings, ignored listing per whitelist Mar 15 23:18:05.874: INFO: namespace e2e-tests-container-probe-jpffn deletion completed in 22.295430519s • [SLOW TEST:82.519 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:18:05.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Mar 15 23:18:06.011: INFO: Waiting up to 5m0s for pod "var-expansion-39cb5aff-6713-11ea-811c-0242ac110013" in namespace "e2e-tests-var-expansion-jzmjx" to be "success or failure" Mar 15 23:18:06.021: INFO: Pod "var-expansion-39cb5aff-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.26708ms Mar 15 23:18:08.025: INFO: Pod "var-expansion-39cb5aff-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013772585s Mar 15 23:18:10.029: INFO: Pod "var-expansion-39cb5aff-6713-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018064019s STEP: Saw pod success Mar 15 23:18:10.029: INFO: Pod "var-expansion-39cb5aff-6713-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:18:10.032: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-39cb5aff-6713-11ea-811c-0242ac110013 container dapi-container: STEP: delete the pod Mar 15 23:18:10.066: INFO: Waiting for pod var-expansion-39cb5aff-6713-11ea-811c-0242ac110013 to disappear Mar 15 23:18:10.074: INFO: Pod var-expansion-39cb5aff-6713-11ea-811c-0242ac110013 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:18:10.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-jzmjx" for this suite. 
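The Variable Expansion test above verifies that `$(VAR)` references in a container's `args` are substituted from the pod's environment before the container starts. A minimal sketch of such a pod (variable name and value are hypothetical, not taken from the log):

```yaml
# Illustrative sketch only: the kubelet expands $(TEST_VAR) in args
# using the container's declared env before launching it.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c"]
    args: ["echo $(TEST_VAR)"]
    env:
    - name: TEST_VAR
      value: test-value
```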
Mar 15 23:18:16.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:18:16.155: INFO: namespace: e2e-tests-var-expansion-jzmjx, resource: bindings, ignored listing per whitelist Mar 15 23:18:16.161: INFO: namespace e2e-tests-var-expansion-jzmjx deletion completed in 6.084591929s • [SLOW TEST:10.287 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:18:16.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:18:42.269: INFO: Container started at 2020-03-15 23:18:18 +0000 UTC, pod became ready at 2020-03-15 23:18:40 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:18:42.269: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-mcvnc" for this suite. Mar 15 23:19:04.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:19:04.369: INFO: namespace: e2e-tests-container-probe-mcvnc, resource: bindings, ignored listing per whitelist Mar 15 23:19:04.373: INFO: namespace e2e-tests-container-probe-mcvnc deletion completed in 22.100321383s • [SLOW TEST:48.211 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:19:04.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 15 23:19:11.256: INFO: Successfully updated pod "annotationupdate5cbf528b-6713-11ea-811c-0242ac110013" [AfterEach] 
[sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:19:13.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d4lxs" for this suite. Mar 15 23:19:39.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:19:39.487: INFO: namespace: e2e-tests-projected-d4lxs, resource: bindings, ignored listing per whitelist Mar 15 23:19:39.540: INFO: namespace e2e-tests-projected-d4lxs deletion completed in 26.130530735s • [SLOW TEST:35.167 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:19:39.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 15 23:19:39.628: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 15 23:19:39.642: INFO: Waiting for terminating namespaces to be deleted... 
Mar 15 23:19:39.645: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 15 23:19:39.651: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Mar 15 23:19:39.651: INFO: Container kube-proxy ready: true, restart count 0 Mar 15 23:19:39.651: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 23:19:39.651: INFO: Container kindnet-cni ready: true, restart count 0 Mar 15 23:19:39.651: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 15 23:19:39.651: INFO: Container coredns ready: true, restart count 0 Mar 15 23:19:39.651: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 15 23:19:39.657: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 23:19:39.657: INFO: Container kindnet-cni ready: true, restart count 0 Mar 15 23:19:39.657: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Mar 15 23:19:39.657: INFO: Container coredns ready: true, restart count 0 Mar 15 23:19:39.657: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Mar 15 23:19:39.657: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fc9d88c602bf87], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
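The FailedScheduling event above is exactly what this predicate test asserts: a pod whose `nodeSelector` matches no node's labels stays Pending with "0/3 nodes are available". A sketch of such a pod (the selector label is hypothetical; the test only requires that no node carries it):

```yaml
# Illustrative sketch only: an unmatchable nodeSelector leaves the pod
# Pending and produces a FailedScheduling event on all nodes.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    env: no-such-label   # present on no node in the cluster
```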
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:19:40.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-9lwws" for this suite. Mar 15 23:19:46.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:19:46.747: INFO: namespace: e2e-tests-sched-pred-9lwws, resource: bindings, ignored listing per whitelist Mar 15 23:19:46.786: INFO: namespace e2e-tests-sched-pred-9lwws deletion completed in 6.086954908s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.246 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:19:46.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:19:51.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-z7vjz" for this suite. Mar 15 23:20:33.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:20:33.092: INFO: namespace: e2e-tests-kubelet-test-z7vjz, resource: bindings, ignored listing per whitelist Mar 15 23:20:33.104: INFO: namespace e2e-tests-kubelet-test-z7vjz deletion completed in 42.083561148s • [SLOW TEST:46.318 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:20:33.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:20:33.421: INFO: Creating deployment "nginx-deployment" Mar 15 23:20:33.447: INFO: Waiting for observed generation 1 Mar 15 23:20:35.468: INFO: Waiting for all required pods to come up Mar 15 23:20:35.472: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 15 23:20:47.986: INFO: Waiting for deployment "nginx-deployment" to complete Mar 15 23:20:48.433: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 15 23:20:49.298: INFO: Updating deployment nginx-deployment Mar 15 23:20:49.298: INFO: Waiting for observed generation 2 Mar 15 23:20:52.458: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 15 23:20:53.561: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 15 23:20:55.183: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 15 23:20:58.027: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 15 23:20:58.027: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 15 23:20:58.782: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 15 23:20:59.972: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 15 23:20:59.972: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 15 23:20:59.978: INFO: Updating deployment nginx-deployment Mar 15 23:20:59.978: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Mar 15 23:21:00.618: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 15 23:21:02.918: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 15 23:21:03.237: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zdfwk/deployments/nginx-deployment,UID:91abee3c-6713-11ea-99e8-0242ac110002,ResourceVersion:48000,Generation:3,CreationTimestamp:2020-03-15 23:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-03-15 23:21:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-15 23:21:02 +0000 UTC 2020-03-15 23:20:33 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 15 23:21:03.434: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zdfwk/replicasets/nginx-deployment-5c98f8fb5,UID:9b229c03-6713-11ea-99e8-0242ac110002,ResourceVersion:47997,Generation:3,CreationTimestamp:2020-03-15 23:20:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 91abee3c-6713-11ea-99e8-0242ac110002 0xc0025acab7 0xc0025acab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 23:21:03.434: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 15 23:21:03.434: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zdfwk/replicasets/nginx-deployment-85ddf47c5d,UID:91b2a055-6713-11ea-99e8-0242ac110002,ResourceVersion:47982,Generation:3,CreationTimestamp:2020-03-15 23:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 91abee3c-6713-11ea-99e8-0242ac110002 0xc0025acb77 0xc0025acb78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 15 23:21:03.652: INFO: Pod "nginx-deployment-5c98f8fb5-4mtm6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4mtm6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-4mtm6,UID:a1e213ec-6713-11ea-99e8-0242ac110002,ResourceVersion:47974,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0025ad4e7 0xc0025ad4e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ad560} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0025ad580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-15 23:21:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.652: INFO: Pod "nginx-deployment-5c98f8fb5-8vqc9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8vqc9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-8vqc9,UID:9c1a088d-6713-11ea-99e8-0242ac110002,ResourceVersion:47914,Generation:0,CreationTimestamp:2020-03-15 23:20:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0025ad640 0xc0025ad641}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ad6c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ad6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.218,StartTime:2020-03-15 23:20:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image 
"docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.652: INFO: Pod "nginx-deployment-5c98f8fb5-bjtkl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bjtkl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-bjtkl,UID:9b6d31e8-6713-11ea-99e8-0242ac110002,ResourceVersion:47904,Generation:0,CreationTimestamp:2020-03-15 23:20:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0025ad7c0 0xc0025ad7c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ad840} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ad860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.217,StartTime:2020-03-15 23:20:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.652: INFO: Pod "nginx-deployment-5c98f8fb5-mpfkn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mpfkn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-mpfkn,UID:9c605fd4-6713-11ea-99e8-0242ac110002,ResourceVersion:47896,Generation:0,CreationTimestamp:2020-03-15 23:20:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0025ad940 0xc0025ad941}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ad9c0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0025ad9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-15 23:20:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.653: INFO: Pod "nginx-deployment-5c98f8fb5-mrlvw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mrlvw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-mrlvw,UID:a1fee97f-6713-11ea-99e8-0242ac110002,ResourceVersion:47972,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0025adaa0 0xc0025adaa1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025adb20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025adb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.653: INFO: Pod "nginx-deployment-5c98f8fb5-p6hfj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p6hfj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-p6hfj,UID:a1feef74-6713-11ea-99e8-0242ac110002,ResourceVersion:47977,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0025adbc0 0xc0025adbc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025adc40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025adc60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.653: INFO: Pod "nginx-deployment-5c98f8fb5-qf862" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qf862,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-qf862,UID:a21e0883-6713-11ea-99e8-0242ac110002,ResourceVersion:47984,Generation:0,CreationTimestamp:2020-03-15 23:21:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0025adce0 0xc0025adce1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025add60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025add80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.653: INFO: Pod "nginx-deployment-5c98f8fb5-qjckb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qjckb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-qjckb,UID:a1e9f2f7-6713-11ea-99e8-0242ac110002,ResourceVersion:47991,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0025addf0 0xc0025addf1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6020} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-15 23:21:01 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.653: INFO: Pod "nginx-deployment-5c98f8fb5-wpdv8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wpdv8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-wpdv8,UID:9b413322-6713-11ea-99e8-0242ac110002,ResourceVersion:47917,Generation:0,CreationTimestamp:2020-03-15 23:20:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0022c6100 0xc0022c6101}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6180} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c61a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.238,StartTime:2020-03-15 23:20:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.654: INFO: Pod "nginx-deployment-5c98f8fb5-x7dsw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-x7dsw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-x7dsw,UID:a1fec7d0-6713-11ea-99e8-0242ac110002,ResourceVersion:47970,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0022c6280 0xc0022c6281}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6300} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.654: INFO: Pod "nginx-deployment-5c98f8fb5-xx875" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xx875,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-xx875,UID:a1fef40a-6713-11ea-99e8-0242ac110002,ResourceVersion:47979,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0022c6390 0xc0022c6391}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6410} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.654: INFO: Pod "nginx-deployment-5c98f8fb5-zqt26" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zqt26,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-zqt26,UID:a1ea1afe-6713-11ea-99e8-0242ac110002,ResourceVersion:48012,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0022c64a0 0xc0022c64a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6520} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-15 23:21:01 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.654: INFO: Pod "nginx-deployment-5c98f8fb5-zr695" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zr695,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-5c98f8fb5-zr695,UID:9b6d41da-6713-11ea-99e8-0242ac110002,ResourceVersion:47963,Generation:0,CreationTimestamp:2020-03-15 23:20:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b229c03-6713-11ea-99e8-0242ac110002 0xc0022c6600 0xc0022c6601}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6680} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c66a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.239,StartTime:2020-03-15 23:20:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.654: INFO: Pod "nginx-deployment-85ddf47c5d-4wzhs" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4wzhs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-4wzhs,UID:91c73ea2-6713-11ea-99e8-0242ac110002,ResourceVersion:47818,Generation:0,CreationTimestamp:2020-03-15 23:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c6780 0xc0022c6781}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0022c67f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.215,StartTime:2020-03-15 23:20:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 23:20:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1317f58317760d12b1981410388c33044c5dc0d0f814ebe7e472a80fed3ab97a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.654: INFO: Pod "nginx-deployment-85ddf47c5d-4zmdx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4zmdx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-4zmdx,UID:91bec0ab-6713-11ea-99e8-0242ac110002,ResourceVersion:47792,Generation:0,CreationTimestamp:2020-03-15 23:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c68d0 0xc0022c68d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6940} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.214,StartTime:2020-03-15 23:20:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 23:20:42 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a5df4e48312ffcf353075f1639f75d57bf317718812d945b861f3fd36b9f2197}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.654: INFO: Pod "nginx-deployment-85ddf47c5d-5k4nc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5k4nc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-5k4nc,UID:91c74eb9-6713-11ea-99e8-0242ac110002,ResourceVersion:47810,Generation:0,CreationTimestamp:2020-03-15 23:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c6a20 0xc0022c6a21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6a90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.216,StartTime:2020-03-15 23:20:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 23:20:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://34dbf26fa5554c459226fe0c865cfb612d599cc98d2ea9ffd2b81f8e4918525d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.654: INFO: Pod "nginx-deployment-85ddf47c5d-66l5l" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-66l5l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-66l5l,UID:a1e25588-6713-11ea-99e8-0242ac110002,ResourceVersion:48004,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c6b70 0xc0022c6b71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0022c6be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-15 23:21:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.655: INFO: Pod "nginx-deployment-85ddf47c5d-6mqxs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6mqxs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-6mqxs,UID:a1e2474a-6713-11ea-99e8-0242ac110002,ResourceVersion:47942,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c6cb0 0xc0022c6cb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.655: INFO: Pod "nginx-deployment-85ddf47c5d-7fl27" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7fl27,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-7fl27,UID:a1fee036-6713-11ea-99e8-0242ac110002,ResourceVersion:47969,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c6db0 0xc0022c6db1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0022c6e20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.655: INFO: Pod "nginx-deployment-85ddf47c5d-82vpp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-82vpp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-82vpp,UID:91bd82c4-6713-11ea-99e8-0242ac110002,ResourceVersion:47776,Generation:0,CreationTimestamp:2020-03-15 23:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c6eb0 0xc0022c6eb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6f20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.213,StartTime:2020-03-15 23:20:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 23:20:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d4927d8fbeef2a68b1d0a8197ed3a220d52ed10fe4a51ff90e02bf0c4127d47e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.655: INFO: Pod "nginx-deployment-85ddf47c5d-b5d82" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b5d82,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-b5d82,UID:a1ea2af9-6713-11ea-99e8-0242ac110002,ResourceVersion:47959,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c7000 0xc0022c7001}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0022c7070} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.655: INFO: Pod "nginx-deployment-85ddf47c5d-bjqdd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bjqdd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-bjqdd,UID:a1feea41-6713-11ea-99e8-0242ac110002,ResourceVersion:47975,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c7100 0xc0022c7101}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7170} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.655: INFO: Pod "nginx-deployment-85ddf47c5d-cqvkn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cqvkn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-cqvkn,UID:a1ea3819-6713-11ea-99e8-0242ac110002,ResourceVersion:47958,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c7220 0xc0022c7221}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c72a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c72c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.655: INFO: Pod "nginx-deployment-85ddf47c5d-hsdcd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hsdcd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-hsdcd,UID:a1ea24a2-6713-11ea-99e8-0242ac110002,ResourceVersion:47960,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c7340 0xc0022c7341}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0022c73b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c73d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.655: INFO: Pod "nginx-deployment-85ddf47c5d-kn5g7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kn5g7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-kn5g7,UID:a1fee2ff-6713-11ea-99e8-0242ac110002,ResourceVersion:47973,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c7450 0xc0022c7451}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7500} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.655: INFO: Pod "nginx-deployment-85ddf47c5d-ln5rp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ln5rp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-ln5rp,UID:a1e1d26b-6713-11ea-99e8-0242ac110002,ResourceVersion:47987,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c7590 0xc0022c7591}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7600} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-15 23:21:00 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.655: INFO: Pod "nginx-deployment-85ddf47c5d-lv9br" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lv9br,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-lv9br,UID:91bec3a2-6713-11ea-99e8-0242ac110002,ResourceVersion:47820,Generation:0,CreationTimestamp:2020-03-15 23:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c78d0 0xc0022c78d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7940} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.236,StartTime:2020-03-15 23:20:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 23:20:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3e19334a4d4198ce1c85526a581b53b9d6cb99ca6959e5e7f45ee0d4363a850c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.656: INFO: Pod "nginx-deployment-85ddf47c5d-lxnnd" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lxnnd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-lxnnd,UID:91bcf15d-6713-11ea-99e8-0242ac110002,ResourceVersion:47805,Generation:0,CreationTimestamp:2020-03-15 23:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c7d40 0xc0022c7d41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0022c7de0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.234,StartTime:2020-03-15 23:20:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 23:20:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5d5814b37e5332e36d6a8bed2a695ea3b332aac5f5d34d0fa3fb5315a02f43b8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.656: INFO: Pod "nginx-deployment-85ddf47c5d-qvnvj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qvnvj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-qvnvj,UID:91bd7867-6713-11ea-99e8-0242ac110002,ResourceVersion:47765,Generation:0,CreationTimestamp:2020-03-15 23:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0022c7fc0 0xc0022c7fc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002142030} {node.kubernetes.io/unreachable Exists NoExecute 0xc002142050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.212,StartTime:2020-03-15 23:20:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 23:20:38 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://34d068ee1fc3d2a2d31651aa68bff89ae10e2e6e6f705cd83ceace6a83db9930}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.656: INFO: Pod "nginx-deployment-85ddf47c5d-r4h86" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r4h86,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-r4h86,UID:91beb6de-6713-11ea-99e8-0242ac110002,ResourceVersion:47788,Generation:0,CreationTimestamp:2020-03-15 23:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc002142110 0xc002142111}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002142180} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021421a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:20:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.233,StartTime:2020-03-15 23:20:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-15 23:20:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ae5c0c2a3ff240d20678624b0d33dd54ed7b5f11727571c736fb598c37fafa85}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.656: INFO: Pod "nginx-deployment-85ddf47c5d-sbh92" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sbh92,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-sbh92,UID:a1ea1e60-6713-11ea-99e8-0242ac110002,ResourceVersion:48017,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc002142260 0xc002142261}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0021422d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021422f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-15 23:21:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.656: INFO: Pod "nginx-deployment-85ddf47c5d-sg45m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sg45m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-sg45m,UID:a1feed72-6713-11ea-99e8-0242ac110002,ResourceVersion:47976,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0021423a0 0xc0021423a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002142410} {node.kubernetes.io/unreachable Exists NoExecute 0xc002142430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 15 23:21:03.656: INFO: Pod "nginx-deployment-85ddf47c5d-slmjm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-slmjm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zdfwk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zdfwk/pods/nginx-deployment-85ddf47c5d-slmjm,UID:a1fedf04-6713-11ea-99e8-0242ac110002,ResourceVersion:47971,Generation:0,CreationTimestamp:2020-03-15 23:21:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 91b2a055-6713-11ea-99e8-0242ac110002 0xc0021424a0 0xc0021424a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kzvfr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzvfr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kzvfr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc002142510} {node.kubernetes.io/unreachable Exists NoExecute 0xc002142530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:21:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:21:03.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-zdfwk" for this suite. Mar 15 23:21:38.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:21:38.053: INFO: namespace: e2e-tests-deployment-zdfwk, resource: bindings, ignored listing per whitelist Mar 15 23:21:38.121: INFO: namespace e2e-tests-deployment-zdfwk deletion completed in 32.65573338s • [SLOW TEST:65.016 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:21:38.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-p4fj STEP: Creating a pod to test atomic-volume-subpath Mar 15 23:21:39.053: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p4fj" in namespace "e2e-tests-subpath-6t249" to be "success or failure" Mar 15 23:21:39.182: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Pending", Reason="", readiness=false. Elapsed: 129.073126ms Mar 15 23:21:41.518: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.464479673s Mar 15 23:21:43.523: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469754722s Mar 15 23:21:45.527: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Running", Reason="", readiness=true. Elapsed: 6.473460376s Mar 15 23:21:47.530: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Running", Reason="", readiness=false. Elapsed: 8.476448986s Mar 15 23:21:49.649: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Running", Reason="", readiness=false. Elapsed: 10.596039635s Mar 15 23:21:51.653: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Running", Reason="", readiness=false. Elapsed: 12.599834071s Mar 15 23:21:53.658: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Running", Reason="", readiness=false. Elapsed: 14.604200844s Mar 15 23:21:55.662: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Running", Reason="", readiness=false. Elapsed: 16.608556674s Mar 15 23:21:57.666: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.612998433s Mar 15 23:21:59.670: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Running", Reason="", readiness=false. Elapsed: 20.616564822s Mar 15 23:22:01.674: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Running", Reason="", readiness=false. Elapsed: 22.621047465s Mar 15 23:22:03.679: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Running", Reason="", readiness=false. Elapsed: 24.626017836s Mar 15 23:22:05.683: INFO: Pod "pod-subpath-test-configmap-p4fj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.630074433s STEP: Saw pod success Mar 15 23:22:05.684: INFO: Pod "pod-subpath-test-configmap-p4fj" satisfied condition "success or failure" Mar 15 23:22:05.687: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-p4fj container test-container-subpath-configmap-p4fj: STEP: delete the pod Mar 15 23:22:05.711: INFO: Waiting for pod pod-subpath-test-configmap-p4fj to disappear Mar 15 23:22:05.721: INFO: Pod pod-subpath-test-configmap-p4fj no longer exists STEP: Deleting pod pod-subpath-test-configmap-p4fj Mar 15 23:22:05.721: INFO: Deleting pod "pod-subpath-test-configmap-p4fj" in namespace "e2e-tests-subpath-6t249" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:22:05.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-6t249" for this suite. 
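[Editor's sketch] The subpath test logged above projects a ConfigMap into a volume and mounts a single key via subPath over the path of an already-existing file. A minimal manifest approximating that fixture might look as follows; the resource names, image, and file contents here are illustrative assumptions, not the generated e2e objects:

```yaml
# Hypothetical reconstruction of an "atomic writer subpath over existing
# file" fixture. The e2e suite generates its own names and uses its own
# test images; busybox and the /etc/resolv.conf target are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-configmap            # assumed name
data:
  configmap-contents: "configmap-value"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/resolv.conf"]
    volumeMounts:
    - name: config
      mountPath: /etc/resolv.conf    # mountPath of an existing file
      subPath: configmap-contents    # only this key is mounted there
  volumes:
  - name: config
    configMap:
      name: subpath-configmap
```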
Mar 15 23:22:11.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:22:12.242: INFO: namespace: e2e-tests-subpath-6t249, resource: bindings, ignored listing per whitelist Mar 15 23:22:12.255: INFO: namespace e2e-tests-subpath-6t249 deletion completed in 6.472004667s • [SLOW TEST:34.134 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:22:12.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:22:12.432: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 15 23:22:12.519: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 15 23:22:17.717: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is 
running Mar 15 23:22:17.717: INFO: Creating deployment "test-rolling-update-deployment" Mar 15 23:22:18.141: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 15 23:22:18.163: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 15 23:22:20.935: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 15 23:22:21.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911339, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911339, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911338, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:22:23.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911339, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911339, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911338, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:22:25.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911339, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911339, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911338, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:22:27.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911339, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911339, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911347, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911338, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:22:29.581: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 15 23:22:29.589: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-4kxrs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4kxrs/deployments/test-rolling-update-deployment,UID:cfd66ffd-6713-11ea-99e8-0242ac110002,ResourceVersion:48497,Generation:1,CreationTimestamp:2020-03-15 23:22:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] 
[] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-15 23:22:19 +0000 UTC 2020-03-15 23:22:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-15 23:22:27 +0000 UTC 2020-03-15 23:22:18 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 15 23:22:29.592: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-4kxrs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4kxrs/replicasets/test-rolling-update-deployment-75db98fb4c,UID:d01ee5a0-6713-11ea-99e8-0242ac110002,ResourceVersion:48488,Generation:1,CreationTimestamp:2020-03-15 23:22:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cfd66ffd-6713-11ea-99e8-0242ac110002 0xc0021c2747 0xc0021c2748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 15 23:22:29.592: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 15 23:22:29.592: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-4kxrs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4kxrs/replicasets/test-rolling-update-controller,UID:ccafca76-6713-11ea-99e8-0242ac110002,ResourceVersion:48496,Generation:2,CreationTimestamp:2020-03-15 23:22:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cfd66ffd-6713-11ea-99e8-0242ac110002 0xc0021c2687 0xc0021c2688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 23:22:29.595: INFO: Pod "test-rolling-update-deployment-75db98fb4c-dj8dh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-dj8dh,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-4kxrs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-4kxrs/pods/test-rolling-update-deployment-75db98fb4c-dj8dh,UID:d033361d-6713-11ea-99e8-0242ac110002,ResourceVersion:48487,Generation:0,CreationTimestamp:2020-03-15 23:22:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c d01ee5a0-6713-11ea-99e8-0242ac110002 0xc0021fe9f7 0xc0021fe9f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bbnmh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bbnmh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-bbnmh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021fea70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021fea90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:22:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:22:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:22:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:22:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.229,StartTime:2020-03-15 23:22:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-15 23:22:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://8c9af23a3502d2a1a4969ddced4bc0707f892021403f2b896ef9592701169e22}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:22:29.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-4kxrs" 
for this suite. Mar 15 23:22:35.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:22:35.654: INFO: namespace: e2e-tests-deployment-4kxrs, resource: bindings, ignored listing per whitelist Mar 15 23:22:35.705: INFO: namespace e2e-tests-deployment-4kxrs deletion completed in 6.107922106s • [SLOW TEST:23.450 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:22:35.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0315 23:22:37.299141 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 15 23:22:37.299: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:22:37.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-24tc7" for this suite. 
Mar 15 23:22:43.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:22:44.076: INFO: namespace: e2e-tests-gc-24tc7, resource: bindings, ignored listing per whitelist Mar 15 23:22:44.086: INFO: namespace e2e-tests-gc-24tc7 deletion completed in 6.78356275s • [SLOW TEST:8.380 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:22:44.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 23:22:44.419: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dfb828bc-6713-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-pvcxb" to be "success or failure" Mar 15 23:22:44.519: INFO: Pod "downwardapi-volume-dfb828bc-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 100.070703ms Mar 15 23:22:47.108: INFO: Pod "downwardapi-volume-dfb828bc-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.68929608s Mar 15 23:22:49.116: INFO: Pod "downwardapi-volume-dfb828bc-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.697313284s Mar 15 23:22:51.120: INFO: Pod "downwardapi-volume-dfb828bc-6713-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.701218818s STEP: Saw pod success Mar 15 23:22:51.120: INFO: Pod "downwardapi-volume-dfb828bc-6713-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:22:51.123: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-dfb828bc-6713-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 15 23:22:51.141: INFO: Waiting for pod downwardapi-volume-dfb828bc-6713-11ea-811c-0242ac110013 to disappear Mar 15 23:22:51.151: INFO: Pod downwardapi-volume-dfb828bc-6713-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:22:51.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pvcxb" for this suite. 
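[Editor's sketch] The Downward API volume test above exposes the container's CPU limit as a file inside the pod. A minimal pod spec with the same shape might read as below; the pod name, image, command, and the 500m limit are assumptions for illustration, not the generated e2e fixture:

```yaml
# Hypothetical sketch of a downward API volume surfacing limits.cpu.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # illustrative limit read back by the test
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```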
Mar 15 23:22:57.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:22:57.227: INFO: namespace: e2e-tests-downward-api-pvcxb, resource: bindings, ignored listing per whitelist Mar 15 23:22:57.261: INFO: namespace e2e-tests-downward-api-pvcxb deletion completed in 6.106857222s • [SLOW TEST:13.175 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:22:57.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-e782b7b4-6713-11ea-811c-0242ac110013 STEP: Creating a pod to test consume configMaps Mar 15 23:22:57.450: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7839595-6713-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-qfvx8" to be "success or failure" Mar 15 23:22:57.468: INFO: Pod 
"pod-projected-configmaps-e7839595-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 17.863271ms Mar 15 23:22:59.472: INFO: Pod "pod-projected-configmaps-e7839595-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021446976s Mar 15 23:23:01.513: INFO: Pod "pod-projected-configmaps-e7839595-6713-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.062494531s Mar 15 23:23:03.573: INFO: Pod "pod-projected-configmaps-e7839595-6713-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.122685244s STEP: Saw pod success Mar 15 23:23:03.573: INFO: Pod "pod-projected-configmaps-e7839595-6713-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:23:03.575: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e7839595-6713-11ea-811c-0242ac110013 container projected-configmap-volume-test: STEP: delete the pod Mar 15 23:23:03.653: INFO: Waiting for pod pod-projected-configmaps-e7839595-6713-11ea-811c-0242ac110013 to disappear Mar 15 23:23:03.776: INFO: Pod pod-projected-configmaps-e7839595-6713-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:23:03.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qfvx8" for this suite. 
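[Editor's note] The projected-configMap test above ("with mappings and Item mode set") mounts a ConfigMap key at a remapped path with an explicit per-item file mode. A sketch of such a volume definition, under assumed names:

```yaml
# Hypothetical pod fragment: projected configMap volume with a key->path
# mapping and an explicit per-item mode
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # assumed image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # illustrative name
          items:
          - key: data-2
            path: path/to/data-2
            mode: 0400
```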
Mar 15 23:23:09.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:23:09.809: INFO: namespace: e2e-tests-projected-qfvx8, resource: bindings, ignored listing per whitelist Mar 15 23:23:09.899: INFO: namespace e2e-tests-projected-qfvx8 deletion completed in 6.118831169s • [SLOW TEST:12.638 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:23:09.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 23:23:10.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef0d0df5-6713-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-hc4mb" to be "success or failure" Mar 15 23:23:10.129: INFO: Pod "downwardapi-volume-ef0d0df5-6713-11ea-811c-0242ac110013": 
Phase="Pending", Reason="", readiness=false. Elapsed: 32.919958ms Mar 15 23:23:12.196: INFO: Pod "downwardapi-volume-ef0d0df5-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099631165s Mar 15 23:23:14.200: INFO: Pod "downwardapi-volume-ef0d0df5-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104176509s Mar 15 23:23:16.224: INFO: Pod "downwardapi-volume-ef0d0df5-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128135886s Mar 15 23:23:18.357: INFO: Pod "downwardapi-volume-ef0d0df5-6713-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.261260051s STEP: Saw pod success Mar 15 23:23:18.357: INFO: Pod "downwardapi-volume-ef0d0df5-6713-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:23:18.360: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ef0d0df5-6713-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 15 23:23:18.393: INFO: Waiting for pod downwardapi-volume-ef0d0df5-6713-11ea-811c-0242ac110013 to disappear Mar 15 23:23:18.410: INFO: Pod downwardapi-volume-ef0d0df5-6713-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:23:18.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hc4mb" for this suite. 
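[Editor's note] The projected downwardAPI test above ("should set DefaultMode on files") sets `defaultMode` at the projected-volume level so every emitted file gets that permission unless overridden per item. A sketch of the relevant volume fragment (field names are the real API fields; the item is illustrative):

```yaml
# Hypothetical volume fragment: defaultMode applied to all files in a
# projected downwardAPI volume
volumes:
- name: podinfo
  projected:
    defaultMode: 0400        # applied to every file below
    sources:
    - downwardAPI:
        items:
        - path: "podname"    # illustrative item
          fieldRef:
            fieldPath: metadata.name
```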
Mar 15 23:23:24.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:23:24.573: INFO: namespace: e2e-tests-projected-hc4mb, resource: bindings, ignored listing per whitelist Mar 15 23:23:24.581: INFO: namespace e2e-tests-projected-hc4mb deletion completed in 6.162924159s • [SLOW TEST:14.682 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:23:24.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-f7df2ee9-6713-11ea-811c-0242ac110013 STEP: Creating a pod to test consume configMaps Mar 15 23:23:24.921: INFO: Waiting up to 5m0s for pod "pod-configmaps-f7df8b7d-6713-11ea-811c-0242ac110013" in namespace "e2e-tests-configmap-l6zvg" to be "success or failure" Mar 15 23:23:25.225: INFO: Pod "pod-configmaps-f7df8b7d-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 304.56899ms Mar 15 23:23:27.229: INFO: Pod "pod-configmaps-f7df8b7d-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308029318s Mar 15 23:23:29.232: INFO: Pod "pod-configmaps-f7df8b7d-6713-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31113029s Mar 15 23:23:31.236: INFO: Pod "pod-configmaps-f7df8b7d-6713-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 6.315740464s Mar 15 23:23:33.241: INFO: Pod "pod-configmaps-f7df8b7d-6713-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.320122129s STEP: Saw pod success Mar 15 23:23:33.241: INFO: Pod "pod-configmaps-f7df8b7d-6713-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:23:33.244: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-f7df8b7d-6713-11ea-811c-0242ac110013 container configmap-volume-test: STEP: delete the pod Mar 15 23:23:33.275: INFO: Waiting for pod pod-configmaps-f7df8b7d-6713-11ea-811c-0242ac110013 to disappear Mar 15 23:23:33.287: INFO: Pod pod-configmaps-f7df8b7d-6713-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:23:33.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-l6zvg" for this suite. 
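[Editor's note] The ConfigMap test above ("with mappings as non-root") runs the consuming container as a non-root user while mounting remapped ConfigMap keys. A sketch of that shape of spec, with the UID and names assumed:

```yaml
# Hypothetical pod spec: consume a configMap volume with key->path mappings
# while running as a non-root user
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # illustrative name
spec:
  securityContext:
    runAsUser: 1000                   # assumed non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox                    # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map # illustrative name
      items:
      - key: data-2
        path: path/to/data-2
```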
Mar 15 23:23:39.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:23:39.340: INFO: namespace: e2e-tests-configmap-l6zvg, resource: bindings, ignored listing per whitelist Mar 15 23:23:39.384: INFO: namespace e2e-tests-configmap-l6zvg deletion completed in 6.093535172s • [SLOW TEST:14.803 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:23:39.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0315 23:24:11.784654 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 15 23:24:11.784: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:24:11.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-24cqh" for this suite. 
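[Editor's note] The garbage-collector test above deletes a Deployment with `deleteOptions.propagationPolicy: Orphan`, then waits 30 seconds to confirm the ReplicaSet is left behind rather than cascaded. A sketch of the DeleteOptions body such a DELETE request carries (the fields are the real API fields; this is not the exact body from the run):

```yaml
# Hypothetical DeleteOptions body sent with the DELETE on the Deployment:
# dependents (the ReplicaSet) are orphaned instead of garbage-collected
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```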
Mar 15 23:24:19.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:24:19.912: INFO: namespace: e2e-tests-gc-24cqh, resource: bindings, ignored listing per whitelist Mar 15 23:24:19.936: INFO: namespace e2e-tests-gc-24cqh deletion completed in 8.149809377s • [SLOW TEST:40.551 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:24:19.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 15 23:24:24.668: INFO: Successfully updated pod "pod-update-activedeadlineseconds-18ce1b54-6714-11ea-811c-0242ac110013" Mar 15 23:24:24.668: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-18ce1b54-6714-11ea-811c-0242ac110013" in namespace 
"e2e-tests-pods-prm66" to be "terminated due to deadline exceeded" Mar 15 23:24:24.772: INFO: Pod "pod-update-activedeadlineseconds-18ce1b54-6714-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 104.012421ms Mar 15 23:24:26.819: INFO: Pod "pod-update-activedeadlineseconds-18ce1b54-6714-11ea-811c-0242ac110013": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.151355044s Mar 15 23:24:26.819: INFO: Pod "pod-update-activedeadlineseconds-18ce1b54-6714-11ea-811c-0242ac110013" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:24:26.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-prm66" for this suite. Mar 15 23:24:32.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:24:32.937: INFO: namespace: e2e-tests-pods-prm66, resource: bindings, ignored listing per whitelist Mar 15 23:24:32.940: INFO: namespace e2e-tests-pods-prm66 deletion completed in 6.116799066s • [SLOW TEST:13.004 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client Mar 15 23:24:32.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-k8gjm [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Mar 15 23:24:33.066: INFO: Found 0 stateful pods, waiting for 3 Mar 15 23:24:43.070: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 15 23:24:43.071: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 15 23:24:43.071: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Mar 15 23:24:53.076: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 15 23:24:53.076: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 15 23:24:53.076: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 15 23:24:53.226: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 15 23:25:03.558: INFO: Updating stateful set ss2 Mar 15 23:25:03.564: INFO: Waiting for Pod
e2e-tests-statefulset-k8gjm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 15 23:25:13.658: INFO: Found 2 stateful pods, waiting for 3 Mar 15 23:25:23.663: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 15 23:25:23.663: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 15 23:25:23.663: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 15 23:25:23.688: INFO: Updating stateful set ss2 Mar 15 23:25:23.697: INFO: Waiting for Pod e2e-tests-statefulset-k8gjm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 15 23:25:33.722: INFO: Updating stateful set ss2 Mar 15 23:25:33.732: INFO: Waiting for StatefulSet e2e-tests-statefulset-k8gjm/ss2 to complete update Mar 15 23:25:33.732: INFO: Waiting for Pod e2e-tests-statefulset-k8gjm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 15 23:25:43.740: INFO: Deleting all statefulset in ns e2e-tests-statefulset-k8gjm Mar 15 23:25:43.743: INFO: Scaling statefulset ss2 to 0 Mar 15 23:26:03.764: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 23:26:03.767: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:26:03.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-k8gjm" for this suite. 
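[Editor's note] The canary and phased rolling updates in the StatefulSet test above are driven by the `rollingUpdate.partition` field: only pods with an ordinal >= the partition receive the new template, and lowering the partition phases the rollout in. A sketch of the relevant spec, reusing the `ss2` name and nginx image seen in the log but with the selector and service name assumed:

```yaml
# Hypothetical StatefulSet fragment: partitioned rolling update (canary)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test                 # assumed headless service name
  selector:
    matchLabels:
      app: ss2                      # assumed label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                  # only ordinals >= 2 get the new template
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
```

Lowering `partition` step by step (2, then 1, then 0) produces exactly the phased rollout the test waits for.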
Mar 15 23:26:11.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:26:11.861: INFO: namespace: e2e-tests-statefulset-k8gjm, resource: bindings, ignored listing per whitelist Mar 15 23:26:11.923: INFO: namespace e2e-tests-statefulset-k8gjm deletion completed in 8.109664915s • [SLOW TEST:98.982 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:26:11.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-ts4z STEP: Creating a pod to test atomic-volume-subpath Mar 15 23:26:12.175: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ts4z" in namespace 
"e2e-tests-subpath-6bnjc" to be "success or failure" Mar 15 23:26:12.191: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Pending", Reason="", readiness=false. Elapsed: 15.469876ms Mar 15 23:26:14.195: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019160462s Mar 15 23:26:16.199: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02338309s Mar 15 23:26:18.203: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Running", Reason="", readiness=false. Elapsed: 6.027323341s Mar 15 23:26:20.211: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Running", Reason="", readiness=false. Elapsed: 8.03554951s Mar 15 23:26:22.215: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Running", Reason="", readiness=false. Elapsed: 10.040027227s Mar 15 23:26:24.219: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Running", Reason="", readiness=false. Elapsed: 12.043867018s Mar 15 23:26:26.224: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Running", Reason="", readiness=false. Elapsed: 14.048190659s Mar 15 23:26:28.228: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Running", Reason="", readiness=false. Elapsed: 16.05276664s Mar 15 23:26:30.240: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Running", Reason="", readiness=false. Elapsed: 18.064680424s Mar 15 23:26:32.244: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Running", Reason="", readiness=false. Elapsed: 20.068820901s Mar 15 23:26:34.249: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Running", Reason="", readiness=false. Elapsed: 22.073569037s Mar 15 23:26:36.253: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Running", Reason="", readiness=false. Elapsed: 24.077902036s Mar 15 23:26:38.258: INFO: Pod "pod-subpath-test-configmap-ts4z": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.082072437s STEP: Saw pod success Mar 15 23:26:38.258: INFO: Pod "pod-subpath-test-configmap-ts4z" satisfied condition "success or failure" Mar 15 23:26:38.260: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-ts4z container test-container-subpath-configmap-ts4z: STEP: delete the pod Mar 15 23:26:38.308: INFO: Waiting for pod pod-subpath-test-configmap-ts4z to disappear Mar 15 23:26:38.366: INFO: Pod pod-subpath-test-configmap-ts4z no longer exists STEP: Deleting pod pod-subpath-test-configmap-ts4z Mar 15 23:26:38.366: INFO: Deleting pod "pod-subpath-test-configmap-ts4z" in namespace "e2e-tests-subpath-6bnjc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:26:38.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-6bnjc" for this suite. Mar 15 23:26:44.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:26:44.411: INFO: namespace: e2e-tests-subpath-6bnjc, resource: bindings, ignored listing per whitelist Mar 15 23:26:44.456: INFO: namespace e2e-tests-subpath-6bnjc deletion completed in 6.084738267s • [SLOW TEST:32.533 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:26:44.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-897qh STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 15 23:26:44.548: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 15 23:27:10.686: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.240:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-897qh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:27:10.687: INFO: >>> kubeConfig: /root/.kube/config I0315 23:27:10.718629 6 log.go:172] (0xc000dc3340) (0xc001bf3c20) Create stream I0315 23:27:10.718671 6 log.go:172] (0xc000dc3340) (0xc001bf3c20) Stream added, broadcasting: 1 I0315 23:27:10.721569 6 log.go:172] (0xc000dc3340) Reply frame received for 1 I0315 23:27:10.721616 6 log.go:172] (0xc000dc3340) (0xc001c363c0) Create stream I0315 23:27:10.721630 6 log.go:172] (0xc000dc3340) (0xc001c363c0) Stream added, broadcasting: 3 I0315 23:27:10.722535 6 log.go:172] (0xc000dc3340) Reply frame received for 3 I0315 23:27:10.722569 6 log.go:172] (0xc000dc3340) (0xc001bf3cc0) Create stream I0315 23:27:10.722582 6 log.go:172] (0xc000dc3340) (0xc001bf3cc0) Stream added, broadcasting: 5 I0315 23:27:10.723485 6 log.go:172] (0xc000dc3340) Reply frame received for 5 I0315 23:27:10.813380 6 
log.go:172] (0xc000dc3340) Data frame received for 3 I0315 23:27:10.813426 6 log.go:172] (0xc001c363c0) (3) Data frame handling I0315 23:27:10.813458 6 log.go:172] (0xc001c363c0) (3) Data frame sent I0315 23:27:10.813528 6 log.go:172] (0xc000dc3340) Data frame received for 5 I0315 23:27:10.813561 6 log.go:172] (0xc001bf3cc0) (5) Data frame handling I0315 23:27:10.813619 6 log.go:172] (0xc000dc3340) Data frame received for 3 I0315 23:27:10.813641 6 log.go:172] (0xc001c363c0) (3) Data frame handling I0315 23:27:10.815831 6 log.go:172] (0xc000dc3340) Data frame received for 1 I0315 23:27:10.815855 6 log.go:172] (0xc001bf3c20) (1) Data frame handling I0315 23:27:10.815870 6 log.go:172] (0xc001bf3c20) (1) Data frame sent I0315 23:27:10.815895 6 log.go:172] (0xc000dc3340) (0xc001bf3c20) Stream removed, broadcasting: 1 I0315 23:27:10.816030 6 log.go:172] (0xc000dc3340) (0xc001bf3c20) Stream removed, broadcasting: 1 I0315 23:27:10.816057 6 log.go:172] (0xc000dc3340) Go away received I0315 23:27:10.816118 6 log.go:172] (0xc000dc3340) (0xc001c363c0) Stream removed, broadcasting: 3 I0315 23:27:10.816159 6 log.go:172] (0xc000dc3340) (0xc001bf3cc0) Stream removed, broadcasting: 5 Mar 15 23:27:10.816: INFO: Found all expected endpoints: [netserver-0] Mar 15 23:27:10.820: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.9:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-897qh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:27:10.820: INFO: >>> kubeConfig: /root/.kube/config I0315 23:27:10.856441 6 log.go:172] (0xc000f46420) (0xc00241ebe0) Create stream I0315 23:27:10.856463 6 log.go:172] (0xc000f46420) (0xc00241ebe0) Stream added, broadcasting: 1 I0315 23:27:10.858480 6 log.go:172] (0xc000f46420) Reply frame received for 1 I0315 23:27:10.858544 6 log.go:172] (0xc000f46420) (0xc001c36500) Create stream I0315 
23:27:10.858572 6 log.go:172] (0xc000f46420) (0xc001c36500) Stream added, broadcasting: 3 I0315 23:27:10.859505 6 log.go:172] (0xc000f46420) Reply frame received for 3 I0315 23:27:10.859530 6 log.go:172] (0xc000f46420) (0xc00241ec80) Create stream I0315 23:27:10.859539 6 log.go:172] (0xc000f46420) (0xc00241ec80) Stream added, broadcasting: 5 I0315 23:27:10.860374 6 log.go:172] (0xc000f46420) Reply frame received for 5 I0315 23:27:10.923514 6 log.go:172] (0xc000f46420) Data frame received for 3 I0315 23:27:10.923552 6 log.go:172] (0xc001c36500) (3) Data frame handling I0315 23:27:10.923579 6 log.go:172] (0xc001c36500) (3) Data frame sent I0315 23:27:10.923593 6 log.go:172] (0xc000f46420) Data frame received for 3 I0315 23:27:10.923606 6 log.go:172] (0xc001c36500) (3) Data frame handling I0315 23:27:10.923841 6 log.go:172] (0xc000f46420) Data frame received for 5 I0315 23:27:10.923864 6 log.go:172] (0xc00241ec80) (5) Data frame handling I0315 23:27:10.925697 6 log.go:172] (0xc000f46420) Data frame received for 1 I0315 23:27:10.925735 6 log.go:172] (0xc00241ebe0) (1) Data frame handling I0315 23:27:10.925769 6 log.go:172] (0xc00241ebe0) (1) Data frame sent I0315 23:27:10.925798 6 log.go:172] (0xc000f46420) (0xc00241ebe0) Stream removed, broadcasting: 1 I0315 23:27:10.925930 6 log.go:172] (0xc000f46420) (0xc00241ebe0) Stream removed, broadcasting: 1 I0315 23:27:10.925967 6 log.go:172] (0xc000f46420) (0xc001c36500) Stream removed, broadcasting: 3 I0315 23:27:10.926015 6 log.go:172] (0xc000f46420) (0xc00241ec80) Stream removed, broadcasting: 5 Mar 15 23:27:10.926: INFO: Found all expected endpoints: [netserver-1] I0315 23:27:10.926126 6 log.go:172] (0xc000f46420) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:27:10.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-897qh" for this suite. 
Mar 15 23:27:32.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:27:33.015: INFO: namespace: e2e-tests-pod-network-test-897qh, resource: bindings, ignored listing per whitelist Mar 15 23:27:33.019: INFO: namespace e2e-tests-pod-network-test-897qh deletion completed in 22.08917318s • [SLOW TEST:48.562 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:27:33.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 15 23:27:42.361: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:27:43.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-rrsfn" for this suite. Mar 15 23:28:06.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:28:06.115: INFO: namespace: e2e-tests-replicaset-rrsfn, resource: bindings, ignored listing per whitelist Mar 15 23:28:06.117: INFO: namespace e2e-tests-replicaset-rrsfn deletion completed in 22.527932705s • [SLOW TEST:33.098 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:28:06.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Mar 15 23:28:06.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8s5n9' Mar 15 23:28:08.362: INFO: 
stderr: "" Mar 15 23:28:08.362: INFO: stdout: "pod/pause created\n" Mar 15 23:28:08.362: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 15 23:28:08.362: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-8s5n9" to be "running and ready" Mar 15 23:28:08.368: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091776ms Mar 15 23:28:10.403: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041465262s Mar 15 23:28:12.407: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.045186678s Mar 15 23:28:12.407: INFO: Pod "pause" satisfied condition "running and ready" Mar 15 23:28:12.407: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Mar 15 23:28:12.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-8s5n9' Mar 15 23:28:12.515: INFO: stderr: "" Mar 15 23:28:12.515: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 15 23:28:12.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8s5n9' Mar 15 23:28:12.624: INFO: stderr: "" Mar 15 23:28:12.624: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 15 23:28:12.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-8s5n9' Mar 15 23:28:12.720: INFO: stderr: "" Mar 15 23:28:12.720: INFO: stdout: "pod/pause 
labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 15 23:28:12.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8s5n9' Mar 15 23:28:12.811: INFO: stderr: "" Mar 15 23:28:12.811: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Mar 15 23:28:12.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8s5n9' Mar 15 23:28:12.915: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 23:28:12.915: INFO: stdout: "pod \"pause\" force deleted\n" Mar 15 23:28:12.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-8s5n9' Mar 15 23:28:13.022: INFO: stderr: "No resources found.\n" Mar 15 23:28:13.022: INFO: stdout: "" Mar 15 23:28:13.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-8s5n9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 15 23:28:13.129: INFO: stderr: "" Mar 15 23:28:13.129: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:28:13.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8s5n9" for this suite. 
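Editor's note: the manifest piped to `kubectl create -f -` above is not printed in the log. A minimal reconstruction of a pause-style pod that fits the commands shown (image and labels are assumptions, not taken from the transcript) would be:

```yaml
# Hypothetical pod manifest for the "pause" pod; the real spec is not in the log.
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause            # matches the cleanup query: kubectl get pods -l name=pause
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed image
```

With the pod running, `kubectl label pods pause testing-label=testing-label-value` adds the label and `kubectl label pods pause testing-label-` (trailing dash) removes it, exactly the two commands the transcript records.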
Mar 15 23:28:19.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:28:19.432: INFO: namespace: e2e-tests-kubectl-8s5n9, resource: bindings, ignored listing per whitelist Mar 15 23:28:19.432: INFO: namespace e2e-tests-kubectl-8s5n9 deletion completed in 6.299893687s • [SLOW TEST:13.315 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:28:19.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall 
+answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-ddxjc.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ddxjc.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ddxjc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-ddxjc.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ddxjc.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ddxjc.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 15 23:28:25.647: INFO: DNS probes using e2e-tests-dns-ddxjc/dns-test-a77d78f6-6714-11ea-811c-0242ac110013 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:28:25.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-ddxjc" for this suite. 
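Editor's note: each probe in the loops above is the same three-step pattern: resolve a name, require non-empty output, write OK. A single-shot sketch of that pattern (assumptions: `getent` is available; `localhost` stands in for `kubernetes.default`, which only resolves against in-cluster DNS; the doubled `$$` in the transcript appears to be escaping applied by the test harness, written here with single `$`):

```shell
# Resolve a name and emit OK only if resolution produced output.
name=localhost
check="$(getent hosts "$name")" && test -n "$check" && echo OK
```

The e2e pods redirect that OK into per-name files under /results, which the prober then collects.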
Mar 15 23:28:31.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:28:31.749: INFO: namespace: e2e-tests-dns-ddxjc, resource: bindings, ignored listing per whitelist
Mar 15 23:28:31.812: INFO: namespace e2e-tests-dns-ddxjc deletion completed in 6.104881094s
• [SLOW TEST:12.380 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:28:31.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-xqx2k
Mar 15 23:28:38.057: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-xqx2k
STEP: checking the pod's current state and verifying that restartCount is present
Mar 15 23:28:38.061: INFO: Initial restart count of pod liveness-exec is 0
Mar 15 23:29:26.163: INFO: Restart count of pod e2e-tests-container-probe-xqx2k/liveness-exec is now 1 (48.101893738s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:29:26.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-xqx2k" for this suite.
Mar 15 23:29:32.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:29:32.307: INFO: namespace: e2e-tests-container-probe-xqx2k, resource: bindings, ignored listing per whitelist
Mar 15 23:29:32.309: INFO: namespace e2e-tests-container-probe-xqx2k deletion completed in 6.083057316s
• [SLOW TEST:60.497 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:29:32.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Mar 15 23:29:32.412: INFO: Waiting up to 5m0s for pod "var-expansion-d2ed033b-6714-11ea-811c-0242ac110013" in namespace "e2e-tests-var-expansion-k4qbt" to be "success or failure"
Mar 15 23:29:32.423: INFO: Pod "var-expansion-d2ed033b-6714-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 10.248603ms
Mar 15 23:29:34.428: INFO: Pod "var-expansion-d2ed033b-6714-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015294722s
Mar 15 23:29:36.432: INFO: Pod "var-expansion-d2ed033b-6714-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019332025s
STEP: Saw pod success
Mar 15 23:29:36.432: INFO: Pod "var-expansion-d2ed033b-6714-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 23:29:36.435: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-d2ed033b-6714-11ea-811c-0242ac110013 container dapi-container:
STEP: delete the pod
Mar 15 23:29:36.468: INFO: Waiting for pod var-expansion-d2ed033b-6714-11ea-811c-0242ac110013 to disappear
Mar 15 23:29:36.476: INFO: Pod var-expansion-d2ed033b-6714-11ea-811c-0242ac110013 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:29:36.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-k4qbt" for this suite.
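Editor's note: the env-composition pod spec is not printed; the mechanism under test is Kubernetes `$(VAR)` substitution in container `env`. A sketch (variable names and image are illustrative, not from the log):

```yaml
# Sketch of env composition: FOOBAR is built from the two vars declared above it.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"    # expands to "foo-value;;bar-value"
```

Only variables defined earlier in the same `env` list are available for expansion; an unresolvable `$(NAME)` is left as literal text.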
Mar 15 23:29:42.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:29:42.562: INFO: namespace: e2e-tests-var-expansion-k4qbt, resource: bindings, ignored listing per whitelist
Mar 15 23:29:42.579: INFO: namespace e2e-tests-var-expansion-k4qbt deletion completed in 6.099801932s
• [SLOW TEST:10.269 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:29:42.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Mar 15 23:29:42.705: INFO: Pod name pod-release: Found 0 pods out of 1
Mar 15 23:29:47.710: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:29:48.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-4qrqh" for this suite.
Mar 15 23:29:56.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:29:57.017: INFO: namespace: e2e-tests-replication-controller-4qrqh, resource: bindings, ignored listing per whitelist
Mar 15 23:29:57.055: INFO: namespace e2e-tests-replication-controller-4qrqh deletion completed in 8.162351789s
• [SLOW TEST:14.476 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:29:57.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 15 23:29:57.150: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1ad357d-6714-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-pm28v" to be "success or failure"
Mar 15 23:29:57.154: INFO: Pod "downwardapi-volume-e1ad357d-6714-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 3.870435ms
Mar 15 23:29:59.202: INFO: Pod "downwardapi-volume-e1ad357d-6714-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05228785s
Mar 15 23:30:01.206: INFO: Pod "downwardapi-volume-e1ad357d-6714-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056172175s
STEP: Saw pod success
Mar 15 23:30:01.206: INFO: Pod "downwardapi-volume-e1ad357d-6714-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 23:30:01.208: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-e1ad357d-6714-11ea-811c-0242ac110013 container client-container:
STEP: delete the pod
Mar 15 23:30:01.237: INFO: Waiting for pod downwardapi-volume-e1ad357d-6714-11ea-811c-0242ac110013 to disappear
Mar 15 23:30:01.249: INFO: Pod downwardapi-volume-e1ad357d-6714-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:30:01.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pm28v" for this suite.
Mar 15 23:30:07.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:30:07.373: INFO: namespace: e2e-tests-downward-api-pm28v, resource: bindings, ignored listing per whitelist Mar 15 23:30:07.387: INFO: namespace e2e-tests-downward-api-pm28v deletion completed in 6.133772361s • [SLOW TEST:10.331 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:30:07.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Mar 15 23:30:14.341: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-e83d3ffc-6714-11ea-811c-0242ac110013", GenerateName:"", Namespace:"e2e-tests-pods-ndzjm", 
SelfLink:"/api/v1/namespaces/e2e-tests-pods-ndzjm/pods/pod-submit-remove-e83d3ffc-6714-11ea-811c-0242ac110013", UID:"e842d1e7-6714-11ea-99e8-0242ac110002", ResourceVersion:"50244", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719911808, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"155653723"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wn54r", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002707940), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wn54r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002797928), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0020b1500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002797970)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002797990)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002797998), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), 
EnableServiceLinks:(*bool)(0xc00279799c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911808, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911812, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911812, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719911808, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.246", StartTime:(*v1.Time)(0xc001e9fe80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001e9fea0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"containerd://74ff96630617d3a77bdf891712fd36da02f4f155b5cd8042ed82b6625c37afae"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 15 23:30:19.355: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:30:19.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-ndzjm" for this suite. Mar 15 23:30:25.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:30:25.419: INFO: namespace: e2e-tests-pods-ndzjm, resource: bindings, ignored listing per whitelist Mar 15 23:30:25.451: INFO: namespace e2e-tests-pods-ndzjm deletion completed in 6.089748902s • [SLOW TEST:18.064 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:30:25.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:30:29.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-h45d7" for this suite. Mar 15 23:31:13.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:31:13.761: INFO: namespace: e2e-tests-kubelet-test-h45d7, resource: bindings, ignored listing per whitelist Mar 15 23:31:13.806: INFO: namespace e2e-tests-kubelet-test-h45d7 deletion completed in 44.092344401s • [SLOW TEST:48.355 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:31:13.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-d5xmn Mar 15 23:31:17.948: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-d5xmn STEP: checking the pod's current state and verifying that restartCount is present Mar 15 23:31:17.951: INFO: Initial restart count of pod liveness-http is 0 Mar 15 23:31:38.032: INFO: Restart count of pod e2e-tests-container-probe-d5xmn/liveness-http is now 1 (20.081313784s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:31:38.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-d5xmn" for this suite. 
Mar 15 23:31:44.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:31:44.242: INFO: namespace: e2e-tests-container-probe-d5xmn, resource: bindings, ignored listing per whitelist Mar 15 23:31:44.322: INFO: namespace e2e-tests-container-probe-d5xmn deletion completed in 6.266200783s • [SLOW TEST:30.516 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:31:44.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:31:44.709: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"21b7ff98-6715-11ea-99e8-0242ac110002", Controller:(*bool)(0xc002359b62), BlockOwnerDeletion:(*bool)(0xc002359b63)}} Mar 15 23:31:44.826: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"21b33f93-6715-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0022c716e), BlockOwnerDeletion:(*bool)(0xc0022c716f)}} 
Mar 15 23:31:44.830: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"21b3efff-6715-11ea-99e8-0242ac110002", Controller:(*bool)(0xc002359f2e), BlockOwnerDeletion:(*bool)(0xc002359f2f)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:31:50.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-7ht4v" for this suite. Mar 15 23:31:56.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:31:56.089: INFO: namespace: e2e-tests-gc-7ht4v, resource: bindings, ignored listing per whitelist Mar 15 23:31:56.143: INFO: namespace e2e-tests-gc-7ht4v deletion completed in 6.105689445s • [SLOW TEST:11.821 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:31:56.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 15 23:32:01.179: INFO: Successfully updated pod "pod-update-28d93e62-6715-11ea-811c-0242ac110013" STEP: verifying the updated pod is in kubernetes Mar 15 23:32:01.188: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:32:01.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vrnt9" for this suite. Mar 15 23:32:25.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:32:25.303: INFO: namespace: e2e-tests-pods-vrnt9, resource: bindings, ignored listing per whitelist Mar 15 23:32:25.313: INFO: namespace e2e-tests-pods-vrnt9 deletion completed in 24.121897205s • [SLOW TEST:29.170 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:32:25.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-3a0a2a28-6715-11ea-811c-0242ac110013 STEP: Creating a pod to test consume configMaps Mar 15 23:32:25.426: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3a0accf7-6715-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-bdqxk" to be "success or failure" Mar 15 23:32:25.440: INFO: Pod "pod-projected-configmaps-3a0accf7-6715-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 13.991182ms Mar 15 23:32:27.444: INFO: Pod "pod-projected-configmaps-3a0accf7-6715-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017695725s Mar 15 23:32:29.448: INFO: Pod "pod-projected-configmaps-3a0accf7-6715-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022169777s STEP: Saw pod success Mar 15 23:32:29.448: INFO: Pod "pod-projected-configmaps-3a0accf7-6715-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:32:29.451: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-3a0accf7-6715-11ea-811c-0242ac110013 container projected-configmap-volume-test: STEP: delete the pod Mar 15 23:32:29.488: INFO: Waiting for pod pod-projected-configmaps-3a0accf7-6715-11ea-811c-0242ac110013 to disappear Mar 15 23:32:29.514: INFO: Pod pod-projected-configmaps-3a0accf7-6715-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:32:29.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bdqxk" for this suite. 
Mar 15 23:32:35.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:32:35.616: INFO: namespace: e2e-tests-projected-bdqxk, resource: bindings, ignored listing per whitelist Mar 15 23:32:35.626: INFO: namespace e2e-tests-projected-bdqxk deletion completed in 6.108347967s • [SLOW TEST:10.313 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:32:35.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 15 23:32:35.721: INFO: Waiting up to 5m0s for pod "downward-api-403111e2-6715-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-k4dgq" to be "success or failure" Mar 15 23:32:35.743: INFO: Pod "downward-api-403111e2-6715-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 22.380783ms Mar 15 23:32:37.747: INFO: Pod "downward-api-403111e2-6715-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025866987s Mar 15 23:32:39.752: INFO: Pod "downward-api-403111e2-6715-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031082362s STEP: Saw pod success Mar 15 23:32:39.752: INFO: Pod "downward-api-403111e2-6715-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:32:39.756: INFO: Trying to get logs from node hunter-worker pod downward-api-403111e2-6715-11ea-811c-0242ac110013 container dapi-container: STEP: delete the pod Mar 15 23:32:39.774: INFO: Waiting for pod downward-api-403111e2-6715-11ea-811c-0242ac110013 to disappear Mar 15 23:32:39.798: INFO: Pod downward-api-403111e2-6715-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:32:39.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-k4dgq" for this suite. Mar 15 23:32:45.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:32:45.880: INFO: namespace: e2e-tests-downward-api-k4dgq, resource: bindings, ignored listing per whitelist Mar 15 23:32:45.893: INFO: namespace e2e-tests-downward-api-k4dgq deletion completed in 6.092335888s • [SLOW TEST:10.267 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:32:45.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:32:46.583: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Mar 15 23:32:46.589: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-7rkz4/daemonsets","resourceVersion":"50738"},"items":null} Mar 15 23:32:46.591: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-7rkz4/pods","resourceVersion":"50738"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:32:46.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-7rkz4" for this suite. 
Mar 15 23:32:52.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:32:52.741: INFO: namespace: e2e-tests-daemonsets-7rkz4, resource: bindings, ignored listing per whitelist Mar 15 23:32:52.778: INFO: namespace e2e-tests-daemonsets-7rkz4 deletion completed in 6.179604457s S [SKIPPING] [6.885 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:32:46.583: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:32:52.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 15 23:33:05.133: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8dbhj PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Mar 15 23:33:05.133: INFO: >>> kubeConfig: /root/.kube/config I0315 23:33:05.162427 6 log.go:172] (0xc00259e420) (0xc002022500) Create stream I0315 23:33:05.162456 6 log.go:172] (0xc00259e420) (0xc002022500) Stream added, broadcasting: 1 I0315 23:33:05.163823 6 log.go:172] (0xc00259e420) Reply frame received for 1 I0315 23:33:05.163851 6 log.go:172] (0xc00259e420) (0xc001820f00) Create stream I0315 23:33:05.163860 6 log.go:172] (0xc00259e420) (0xc001820f00) Stream added, broadcasting: 3 I0315 23:33:05.164703 6 log.go:172] (0xc00259e420) Reply frame received for 3 I0315 23:33:05.164751 6 log.go:172] (0xc00259e420) (0xc0018050e0) Create stream I0315 23:33:05.164769 6 log.go:172] (0xc00259e420) (0xc0018050e0) Stream added, broadcasting: 5 I0315 23:33:05.165580 6 log.go:172] (0xc00259e420) Reply frame received for 5 I0315 23:33:05.224240 6 log.go:172] (0xc00259e420) Data frame received for 3 I0315 23:33:05.224277 6 log.go:172] (0xc001820f00) (3) Data frame handling I0315 23:33:05.224292 6 log.go:172] (0xc001820f00) (3) Data frame sent I0315 23:33:05.224302 6 log.go:172] (0xc00259e420) Data frame received for 3 I0315 23:33:05.224310 6 log.go:172] (0xc001820f00) (3) Data frame handling I0315 23:33:05.224336 6 log.go:172] (0xc00259e420) Data frame received for 5 I0315 23:33:05.224346 6 log.go:172] (0xc0018050e0) (5) Data frame handling I0315 23:33:05.225821 6 log.go:172] (0xc00259e420) Data frame received for 1 I0315 23:33:05.225838 6 log.go:172] (0xc002022500) (1) Data frame handling I0315 23:33:05.225847 6 log.go:172] (0xc002022500) (1) Data frame sent I0315 23:33:05.225862 6 log.go:172] (0xc00259e420) (0xc002022500) Stream removed, broadcasting: 1 I0315 23:33:05.225939 6 log.go:172] (0xc00259e420) Go away received I0315 23:33:05.225985 6 log.go:172] (0xc00259e420) (0xc002022500) Stream removed, broadcasting: 1 I0315 23:33:05.226010 6 log.go:172] (0xc00259e420) (0xc001820f00) Stream removed, broadcasting: 3 I0315 
23:33:05.226024 6 log.go:172] (0xc00259e420) (0xc0018050e0) Stream removed, broadcasting: 5 Mar 15 23:33:05.226: INFO: Exec stderr: "" Mar 15 23:33:05.226: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8dbhj PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:33:05.226: INFO: >>> kubeConfig: /root/.kube/config I0315 23:33:05.254603 6 log.go:172] (0xc00259e8f0) (0xc002022820) Create stream I0315 23:33:05.254650 6 log.go:172] (0xc00259e8f0) (0xc002022820) Stream added, broadcasting: 1 I0315 23:33:05.257534 6 log.go:172] (0xc00259e8f0) Reply frame received for 1 I0315 23:33:05.257571 6 log.go:172] (0xc00259e8f0) (0xc0019fc500) Create stream I0315 23:33:05.257585 6 log.go:172] (0xc00259e8f0) (0xc0019fc500) Stream added, broadcasting: 3 I0315 23:33:05.258846 6 log.go:172] (0xc00259e8f0) Reply frame received for 3 I0315 23:33:05.258893 6 log.go:172] (0xc00259e8f0) (0xc001820fa0) Create stream I0315 23:33:05.258918 6 log.go:172] (0xc00259e8f0) (0xc001820fa0) Stream added, broadcasting: 5 I0315 23:33:05.259773 6 log.go:172] (0xc00259e8f0) Reply frame received for 5 I0315 23:33:05.314704 6 log.go:172] (0xc00259e8f0) Data frame received for 5 I0315 23:33:05.314742 6 log.go:172] (0xc001820fa0) (5) Data frame handling I0315 23:33:05.314793 6 log.go:172] (0xc00259e8f0) Data frame received for 3 I0315 23:33:05.314821 6 log.go:172] (0xc0019fc500) (3) Data frame handling I0315 23:33:05.314839 6 log.go:172] (0xc0019fc500) (3) Data frame sent I0315 23:33:05.314849 6 log.go:172] (0xc00259e8f0) Data frame received for 3 I0315 23:33:05.314854 6 log.go:172] (0xc0019fc500) (3) Data frame handling I0315 23:33:05.315883 6 log.go:172] (0xc00259e8f0) Data frame received for 1 I0315 23:33:05.315898 6 log.go:172] (0xc002022820) (1) Data frame handling I0315 23:33:05.315913 6 log.go:172] (0xc002022820) (1) Data frame sent I0315 23:33:05.315971 6 log.go:172] (0xc00259e8f0) 
(0xc002022820) Stream removed, broadcasting: 1 I0315 23:33:05.316037 6 log.go:172] (0xc00259e8f0) Go away received I0315 23:33:05.316091 6 log.go:172] (0xc00259e8f0) (0xc002022820) Stream removed, broadcasting: 1 I0315 23:33:05.316156 6 log.go:172] (0xc00259e8f0) (0xc0019fc500) Stream removed, broadcasting: 3 I0315 23:33:05.316174 6 log.go:172] (0xc00259e8f0) (0xc001820fa0) Stream removed, broadcasting: 5 Mar 15 23:33:05.316: INFO: Exec stderr: "" Mar 15 23:33:05.316: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8dbhj PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:33:05.316: INFO: >>> kubeConfig: /root/.kube/config I0315 23:33:05.342944 6 log.go:172] (0xc000c9a580) (0xc001805360) Create stream I0315 23:33:05.342978 6 log.go:172] (0xc000c9a580) (0xc001805360) Stream added, broadcasting: 1 I0315 23:33:05.351698 6 log.go:172] (0xc000c9a580) Reply frame received for 1 I0315 23:33:05.351742 6 log.go:172] (0xc000c9a580) (0xc001976000) Create stream I0315 23:33:05.351754 6 log.go:172] (0xc000c9a580) (0xc001976000) Stream added, broadcasting: 3 I0315 23:33:05.352548 6 log.go:172] (0xc000c9a580) Reply frame received for 3 I0315 23:33:05.352593 6 log.go:172] (0xc000c9a580) (0xc001976140) Create stream I0315 23:33:05.352604 6 log.go:172] (0xc000c9a580) (0xc001976140) Stream added, broadcasting: 5 I0315 23:33:05.353513 6 log.go:172] (0xc000c9a580) Reply frame received for 5 I0315 23:33:05.431981 6 log.go:172] (0xc000c9a580) Data frame received for 5 I0315 23:33:05.432007 6 log.go:172] (0xc001976140) (5) Data frame handling I0315 23:33:05.432043 6 log.go:172] (0xc000c9a580) Data frame received for 3 I0315 23:33:05.432202 6 log.go:172] (0xc001976000) (3) Data frame handling I0315 23:33:05.432253 6 log.go:172] (0xc001976000) (3) Data frame sent I0315 23:33:05.432298 6 log.go:172] (0xc000c9a580) Data frame received for 3 I0315 23:33:05.432336 6 log.go:172] 
(0xc001976000) (3) Data frame handling I0315 23:33:05.434013 6 log.go:172] (0xc000c9a580) Data frame received for 1 I0315 23:33:05.434037 6 log.go:172] (0xc001805360) (1) Data frame handling I0315 23:33:05.434060 6 log.go:172] (0xc001805360) (1) Data frame sent I0315 23:33:05.434084 6 log.go:172] (0xc000c9a580) (0xc001805360) Stream removed, broadcasting: 1 I0315 23:33:05.434115 6 log.go:172] (0xc000c9a580) Go away received I0315 23:33:05.434223 6 log.go:172] (0xc000c9a580) (0xc001805360) Stream removed, broadcasting: 1 I0315 23:33:05.434250 6 log.go:172] (0xc000c9a580) (0xc001976000) Stream removed, broadcasting: 3 I0315 23:33:05.434262 6 log.go:172] (0xc000c9a580) (0xc001976140) Stream removed, broadcasting: 5 Mar 15 23:33:05.434: INFO: Exec stderr: "" Mar 15 23:33:05.434: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8dbhj PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:33:05.434: INFO: >>> kubeConfig: /root/.kube/config I0315 23:33:05.467432 6 log.go:172] (0xc000f46370) (0xc0026741e0) Create stream I0315 23:33:05.467457 6 log.go:172] (0xc000f46370) (0xc0026741e0) Stream added, broadcasting: 1 I0315 23:33:05.469535 6 log.go:172] (0xc000f46370) Reply frame received for 1 I0315 23:33:05.469579 6 log.go:172] (0xc000f46370) (0xc000d280a0) Create stream I0315 23:33:05.469590 6 log.go:172] (0xc000f46370) (0xc000d280a0) Stream added, broadcasting: 3 I0315 23:33:05.470482 6 log.go:172] (0xc000f46370) Reply frame received for 3 I0315 23:33:05.470521 6 log.go:172] (0xc000f46370) (0xc0016920a0) Create stream I0315 23:33:05.470537 6 log.go:172] (0xc000f46370) (0xc0016920a0) Stream added, broadcasting: 5 I0315 23:33:05.471205 6 log.go:172] (0xc000f46370) Reply frame received for 5 I0315 23:33:05.528433 6 log.go:172] (0xc000f46370) Data frame received for 3 I0315 23:33:05.528478 6 log.go:172] (0xc000d280a0) (3) Data frame handling I0315 
23:33:05.528493 6 log.go:172] (0xc000d280a0) (3) Data frame sent I0315 23:33:05.528506 6 log.go:172] (0xc000f46370) Data frame received for 3 I0315 23:33:05.528518 6 log.go:172] (0xc000d280a0) (3) Data frame handling I0315 23:33:05.528560 6 log.go:172] (0xc000f46370) Data frame received for 5 I0315 23:33:05.528586 6 log.go:172] (0xc0016920a0) (5) Data frame handling I0315 23:33:05.529895 6 log.go:172] (0xc000f46370) Data frame received for 1 I0315 23:33:05.529926 6 log.go:172] (0xc0026741e0) (1) Data frame handling I0315 23:33:05.529951 6 log.go:172] (0xc0026741e0) (1) Data frame sent I0315 23:33:05.529971 6 log.go:172] (0xc000f46370) (0xc0026741e0) Stream removed, broadcasting: 1 I0315 23:33:05.530011 6 log.go:172] (0xc000f46370) Go away received I0315 23:33:05.530076 6 log.go:172] (0xc000f46370) (0xc0026741e0) Stream removed, broadcasting: 1 I0315 23:33:05.530096 6 log.go:172] (0xc000f46370) (0xc000d280a0) Stream removed, broadcasting: 3 I0315 23:33:05.530111 6 log.go:172] (0xc000f46370) (0xc0016920a0) Stream removed, broadcasting: 5 Mar 15 23:33:05.530: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 15 23:33:05.530: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8dbhj PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:33:05.530: INFO: >>> kubeConfig: /root/.kube/config I0315 23:33:05.559292 6 log.go:172] (0xc000c9a420) (0xc000d28460) Create stream I0315 23:33:05.559316 6 log.go:172] (0xc000c9a420) (0xc000d28460) Stream added, broadcasting: 1 I0315 23:33:05.560991 6 log.go:172] (0xc000c9a420) Reply frame received for 1 I0315 23:33:05.561032 6 log.go:172] (0xc000c9a420) (0xc000d28500) Create stream I0315 23:33:05.561045 6 log.go:172] (0xc000c9a420) (0xc000d28500) Stream added, broadcasting: 3 I0315 23:33:05.561992 6 log.go:172] (0xc000c9a420) Reply frame received for 3 
I0315 23:33:05.562019 6 log.go:172] (0xc000c9a420) (0xc002674280) Create stream I0315 23:33:05.562027 6 log.go:172] (0xc000c9a420) (0xc002674280) Stream added, broadcasting: 5 I0315 23:33:05.562731 6 log.go:172] (0xc000c9a420) Reply frame received for 5 I0315 23:33:05.632299 6 log.go:172] (0xc000c9a420) Data frame received for 5 I0315 23:33:05.632350 6 log.go:172] (0xc002674280) (5) Data frame handling I0315 23:33:05.632382 6 log.go:172] (0xc000c9a420) Data frame received for 3 I0315 23:33:05.632395 6 log.go:172] (0xc000d28500) (3) Data frame handling I0315 23:33:05.632410 6 log.go:172] (0xc000d28500) (3) Data frame sent I0315 23:33:05.632423 6 log.go:172] (0xc000c9a420) Data frame received for 3 I0315 23:33:05.632434 6 log.go:172] (0xc000d28500) (3) Data frame handling I0315 23:33:05.634252 6 log.go:172] (0xc000c9a420) Data frame received for 1 I0315 23:33:05.634281 6 log.go:172] (0xc000d28460) (1) Data frame handling I0315 23:33:05.634292 6 log.go:172] (0xc000d28460) (1) Data frame sent I0315 23:33:05.634299 6 log.go:172] (0xc000c9a420) (0xc000d28460) Stream removed, broadcasting: 1 I0315 23:33:05.634316 6 log.go:172] (0xc000c9a420) Go away received I0315 23:33:05.634400 6 log.go:172] (0xc000c9a420) (0xc000d28460) Stream removed, broadcasting: 1 I0315 23:33:05.634421 6 log.go:172] (0xc000c9a420) (0xc000d28500) Stream removed, broadcasting: 3 I0315 23:33:05.634433 6 log.go:172] (0xc000c9a420) (0xc002674280) Stream removed, broadcasting: 5 Mar 15 23:33:05.634: INFO: Exec stderr: "" Mar 15 23:33:05.634: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8dbhj PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:33:05.634: INFO: >>> kubeConfig: /root/.kube/config I0315 23:33:05.662294 6 log.go:172] (0xc000dc3290) (0xc001cfe1e0) Create stream I0315 23:33:05.662323 6 log.go:172] (0xc000dc3290) (0xc001cfe1e0) Stream added, broadcasting: 1 I0315 
23:33:05.664380 6 log.go:172] (0xc000dc3290) Reply frame received for 1 I0315 23:33:05.664406 6 log.go:172] (0xc000dc3290) (0xc001692140) Create stream I0315 23:33:05.664416 6 log.go:172] (0xc000dc3290) (0xc001692140) Stream added, broadcasting: 3 I0315 23:33:05.665333 6 log.go:172] (0xc000dc3290) Reply frame received for 3 I0315 23:33:05.665363 6 log.go:172] (0xc000dc3290) (0xc0016921e0) Create stream I0315 23:33:05.665373 6 log.go:172] (0xc000dc3290) (0xc0016921e0) Stream added, broadcasting: 5 I0315 23:33:05.666077 6 log.go:172] (0xc000dc3290) Reply frame received for 5 I0315 23:33:05.725041 6 log.go:172] (0xc000dc3290) Data frame received for 5 I0315 23:33:05.725091 6 log.go:172] (0xc0016921e0) (5) Data frame handling I0315 23:33:05.725244 6 log.go:172] (0xc000dc3290) Data frame received for 3 I0315 23:33:05.725281 6 log.go:172] (0xc001692140) (3) Data frame handling I0315 23:33:05.725315 6 log.go:172] (0xc001692140) (3) Data frame sent I0315 23:33:05.725328 6 log.go:172] (0xc000dc3290) Data frame received for 3 I0315 23:33:05.725337 6 log.go:172] (0xc001692140) (3) Data frame handling I0315 23:33:05.726813 6 log.go:172] (0xc000dc3290) Data frame received for 1 I0315 23:33:05.726828 6 log.go:172] (0xc001cfe1e0) (1) Data frame handling I0315 23:33:05.726837 6 log.go:172] (0xc001cfe1e0) (1) Data frame sent I0315 23:33:05.726853 6 log.go:172] (0xc000dc3290) (0xc001cfe1e0) Stream removed, broadcasting: 1 I0315 23:33:05.726876 6 log.go:172] (0xc000dc3290) Go away received I0315 23:33:05.726984 6 log.go:172] (0xc000dc3290) (0xc001cfe1e0) Stream removed, broadcasting: 1 I0315 23:33:05.727011 6 log.go:172] (0xc000dc3290) (0xc001692140) Stream removed, broadcasting: 3 I0315 23:33:05.727023 6 log.go:172] (0xc000dc3290) (0xc0016921e0) Stream removed, broadcasting: 5 Mar 15 23:33:05.727: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 15 23:33:05.727: INFO: ExecWithOptions {Command:[cat 
/etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8dbhj PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:33:05.727: INFO: >>> kubeConfig: /root/.kube/config I0315 23:33:05.760963 6 log.go:172] (0xc000dc3760) (0xc001cfe460) Create stream I0315 23:33:05.761070 6 log.go:172] (0xc000dc3760) (0xc001cfe460) Stream added, broadcasting: 1 I0315 23:33:05.763880 6 log.go:172] (0xc000dc3760) Reply frame received for 1 I0315 23:33:05.763956 6 log.go:172] (0xc000dc3760) (0xc000d285a0) Create stream I0315 23:33:05.763994 6 log.go:172] (0xc000dc3760) (0xc000d285a0) Stream added, broadcasting: 3 I0315 23:33:05.764978 6 log.go:172] (0xc000dc3760) Reply frame received for 3 I0315 23:33:05.765012 6 log.go:172] (0xc000dc3760) (0xc001692280) Create stream I0315 23:33:05.765033 6 log.go:172] (0xc000dc3760) (0xc001692280) Stream added, broadcasting: 5 I0315 23:33:05.765927 6 log.go:172] (0xc000dc3760) Reply frame received for 5 I0315 23:33:05.821554 6 log.go:172] (0xc000dc3760) Data frame received for 3 I0315 23:33:05.821611 6 log.go:172] (0xc000d285a0) (3) Data frame handling I0315 23:33:05.821649 6 log.go:172] (0xc000d285a0) (3) Data frame sent I0315 23:33:05.821709 6 log.go:172] (0xc000dc3760) Data frame received for 3 I0315 23:33:05.821730 6 log.go:172] (0xc000d285a0) (3) Data frame handling I0315 23:33:05.821868 6 log.go:172] (0xc000dc3760) Data frame received for 5 I0315 23:33:05.821883 6 log.go:172] (0xc001692280) (5) Data frame handling I0315 23:33:05.823203 6 log.go:172] (0xc000dc3760) Data frame received for 1 I0315 23:33:05.823224 6 log.go:172] (0xc001cfe460) (1) Data frame handling I0315 23:33:05.823239 6 log.go:172] (0xc001cfe460) (1) Data frame sent I0315 23:33:05.823251 6 log.go:172] (0xc000dc3760) (0xc001cfe460) Stream removed, broadcasting: 1 I0315 23:33:05.823291 6 log.go:172] (0xc000dc3760) Go away received I0315 23:33:05.823436 6 log.go:172] (0xc000dc3760) (0xc001cfe460) Stream 
removed, broadcasting: 1 I0315 23:33:05.823463 6 log.go:172] (0xc000dc3760) (0xc000d285a0) Stream removed, broadcasting: 3 I0315 23:33:05.823488 6 log.go:172] (0xc000dc3760) (0xc001692280) Stream removed, broadcasting: 5 Mar 15 23:33:05.823: INFO: Exec stderr: "" Mar 15 23:33:05.823: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8dbhj PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:33:05.823: INFO: >>> kubeConfig: /root/.kube/config I0315 23:33:05.888430 6 log.go:172] (0xc00259e2c0) (0xc001692500) Create stream I0315 23:33:05.888481 6 log.go:172] (0xc00259e2c0) (0xc001692500) Stream added, broadcasting: 1 I0315 23:33:05.891697 6 log.go:172] (0xc00259e2c0) Reply frame received for 1 I0315 23:33:05.891746 6 log.go:172] (0xc00259e2c0) (0xc001976280) Create stream I0315 23:33:05.891761 6 log.go:172] (0xc00259e2c0) (0xc001976280) Stream added, broadcasting: 3 I0315 23:33:05.892855 6 log.go:172] (0xc00259e2c0) Reply frame received for 3 I0315 23:33:05.892890 6 log.go:172] (0xc00259e2c0) (0xc001cfe500) Create stream I0315 23:33:05.892903 6 log.go:172] (0xc00259e2c0) (0xc001cfe500) Stream added, broadcasting: 5 I0315 23:33:05.893995 6 log.go:172] (0xc00259e2c0) Reply frame received for 5 I0315 23:33:05.956805 6 log.go:172] (0xc00259e2c0) Data frame received for 3 I0315 23:33:05.956847 6 log.go:172] (0xc001976280) (3) Data frame handling I0315 23:33:05.956879 6 log.go:172] (0xc001976280) (3) Data frame sent I0315 23:33:05.956904 6 log.go:172] (0xc00259e2c0) Data frame received for 3 I0315 23:33:05.956919 6 log.go:172] (0xc001976280) (3) Data frame handling I0315 23:33:05.956992 6 log.go:172] (0xc00259e2c0) Data frame received for 5 I0315 23:33:05.957021 6 log.go:172] (0xc001cfe500) (5) Data frame handling I0315 23:33:05.958571 6 log.go:172] (0xc00259e2c0) Data frame received for 1 I0315 23:33:05.958602 6 log.go:172] (0xc001692500) (1) 
Data frame handling I0315 23:33:05.958625 6 log.go:172] (0xc001692500) (1) Data frame sent I0315 23:33:05.958640 6 log.go:172] (0xc00259e2c0) (0xc001692500) Stream removed, broadcasting: 1 I0315 23:33:05.958655 6 log.go:172] (0xc00259e2c0) Go away received I0315 23:33:05.958831 6 log.go:172] (0xc00259e2c0) (0xc001692500) Stream removed, broadcasting: 1 I0315 23:33:05.958857 6 log.go:172] (0xc00259e2c0) (0xc001976280) Stream removed, broadcasting: 3 I0315 23:33:05.958871 6 log.go:172] (0xc00259e2c0) (0xc001cfe500) Stream removed, broadcasting: 5 Mar 15 23:33:05.958: INFO: Exec stderr: "" Mar 15 23:33:05.958: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8dbhj PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:33:05.958: INFO: >>> kubeConfig: /root/.kube/config I0315 23:33:05.988498 6 log.go:172] (0xc000c9a9a0) (0xc000d28a00) Create stream I0315 23:33:05.988546 6 log.go:172] (0xc000c9a9a0) (0xc000d28a00) Stream added, broadcasting: 1 I0315 23:33:05.990743 6 log.go:172] (0xc000c9a9a0) Reply frame received for 1 I0315 23:33:05.990782 6 log.go:172] (0xc000c9a9a0) (0xc002674320) Create stream I0315 23:33:05.990794 6 log.go:172] (0xc000c9a9a0) (0xc002674320) Stream added, broadcasting: 3 I0315 23:33:05.991613 6 log.go:172] (0xc000c9a9a0) Reply frame received for 3 I0315 23:33:05.991646 6 log.go:172] (0xc000c9a9a0) (0xc001976320) Create stream I0315 23:33:05.991660 6 log.go:172] (0xc000c9a9a0) (0xc001976320) Stream added, broadcasting: 5 I0315 23:33:05.992399 6 log.go:172] (0xc000c9a9a0) Reply frame received for 5 I0315 23:33:06.044696 6 log.go:172] (0xc000c9a9a0) Data frame received for 3 I0315 23:33:06.044723 6 log.go:172] (0xc002674320) (3) Data frame handling I0315 23:33:06.044731 6 log.go:172] (0xc002674320) (3) Data frame sent I0315 23:33:06.044752 6 log.go:172] (0xc000c9a9a0) Data frame received for 5 I0315 23:33:06.044819 6 log.go:172] 
(0xc001976320) (5) Data frame handling I0315 23:33:06.044867 6 log.go:172] (0xc000c9a9a0) Data frame received for 3 I0315 23:33:06.044908 6 log.go:172] (0xc002674320) (3) Data frame handling I0315 23:33:06.046804 6 log.go:172] (0xc000c9a9a0) Data frame received for 1 I0315 23:33:06.046827 6 log.go:172] (0xc000d28a00) (1) Data frame handling I0315 23:33:06.046851 6 log.go:172] (0xc000d28a00) (1) Data frame sent I0315 23:33:06.046869 6 log.go:172] (0xc000c9a9a0) (0xc000d28a00) Stream removed, broadcasting: 1 I0315 23:33:06.046939 6 log.go:172] (0xc000c9a9a0) Go away received I0315 23:33:06.047027 6 log.go:172] (0xc000c9a9a0) (0xc000d28a00) Stream removed, broadcasting: 1 I0315 23:33:06.047045 6 log.go:172] (0xc000c9a9a0) (0xc002674320) Stream removed, broadcasting: 3 I0315 23:33:06.047060 6 log.go:172] (0xc000c9a9a0) (0xc001976320) Stream removed, broadcasting: 5 Mar 15 23:33:06.047: INFO: Exec stderr: "" Mar 15 23:33:06.047: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-8dbhj PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:33:06.047: INFO: >>> kubeConfig: /root/.kube/config I0315 23:33:06.083314 6 log.go:172] (0xc000f46840) (0xc002674640) Create stream I0315 23:33:06.083567 6 log.go:172] (0xc000f46840) (0xc002674640) Stream added, broadcasting: 1 I0315 23:33:06.093807 6 log.go:172] (0xc000f46840) Reply frame received for 1 I0315 23:33:06.093872 6 log.go:172] (0xc000f46840) (0xc000d28aa0) Create stream I0315 23:33:06.093889 6 log.go:172] (0xc000f46840) (0xc000d28aa0) Stream added, broadcasting: 3 I0315 23:33:06.094797 6 log.go:172] (0xc000f46840) Reply frame received for 3 I0315 23:33:06.094834 6 log.go:172] (0xc000f46840) (0xc001976460) Create stream I0315 23:33:06.094870 6 log.go:172] (0xc000f46840) (0xc001976460) Stream added, broadcasting: 5 I0315 23:33:06.095874 6 log.go:172] (0xc000f46840) Reply frame received for 5 I0315 
23:33:06.146720 6 log.go:172] (0xc000f46840) Data frame received for 5 I0315 23:33:06.146775 6 log.go:172] (0xc001976460) (5) Data frame handling I0315 23:33:06.146821 6 log.go:172] (0xc000f46840) Data frame received for 3 I0315 23:33:06.146840 6 log.go:172] (0xc000d28aa0) (3) Data frame handling I0315 23:33:06.146861 6 log.go:172] (0xc000d28aa0) (3) Data frame sent I0315 23:33:06.146883 6 log.go:172] (0xc000f46840) Data frame received for 3 I0315 23:33:06.146902 6 log.go:172] (0xc000d28aa0) (3) Data frame handling I0315 23:33:06.148585 6 log.go:172] (0xc000f46840) Data frame received for 1 I0315 23:33:06.148620 6 log.go:172] (0xc002674640) (1) Data frame handling I0315 23:33:06.148649 6 log.go:172] (0xc002674640) (1) Data frame sent I0315 23:33:06.148670 6 log.go:172] (0xc000f46840) (0xc002674640) Stream removed, broadcasting: 1 I0315 23:33:06.148694 6 log.go:172] (0xc000f46840) Go away received I0315 23:33:06.148829 6 log.go:172] (0xc000f46840) (0xc002674640) Stream removed, broadcasting: 1 I0315 23:33:06.148852 6 log.go:172] (0xc000f46840) (0xc000d28aa0) Stream removed, broadcasting: 3 I0315 23:33:06.148872 6 log.go:172] (0xc000f46840) (0xc001976460) Stream removed, broadcasting: 5 Mar 15 23:33:06.148: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:33:06.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-8dbhj" for this suite. 
Mar 15 23:33:56.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:33:56.398: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-8dbhj, resource: bindings, ignored listing per whitelist Mar 15 23:33:56.408: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-8dbhj deletion completed in 50.254045355s • [SLOW TEST:63.629 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:33:56.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:33:56.527: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 15 23:34:01.532: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 15 23:34:01.532: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 15 23:34:03.536: INFO: Creating deployment "test-rollover-deployment" Mar 15 23:34:03.549: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 
15 23:34:05.556: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 15 23:34:05.562: INFO: Ensure that both replica sets have 1 created replica Mar 15 23:34:05.567: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 15 23:34:05.574: INFO: Updating deployment test-rollover-deployment Mar 15 23:34:05.574: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 15 23:34:07.602: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 15 23:34:07.609: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 15 23:34:08.076: INFO: all replica sets need to contain the pod-template-hash label Mar 15 23:34:08.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912045, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:34:10.084: INFO: all replica sets need to contain the pod-template-hash label Mar 15 23:34:10.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912045, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:34:12.084: INFO: all replica sets need to contain the pod-template-hash label Mar 15 23:34:12.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912051, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:34:14.083: INFO: all replica sets need to contain the pod-template-hash label Mar 15 23:34:14.083: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912051, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:34:16.085: INFO: all replica sets need to contain the pod-template-hash label Mar 15 23:34:16.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912051, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:34:18.083: INFO: all 
replica sets need to contain the pod-template-hash label Mar 15 23:34:18.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912051, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:34:20.083: INFO: all replica sets need to contain the pod-template-hash label Mar 15 23:34:20.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912051, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912043, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:34:22.151: INFO: Mar 15 23:34:22.151: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 15 23:34:22.159: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-ffl5q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ffl5q/deployments/test-rollover-deployment,UID:7489c8ea-6715-11ea-99e8-0242ac110002,ResourceVersion:51058,Generation:2,CreationTimestamp:2020-03-15 23:34:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-15 23:34:03 +0000 UTC 2020-03-15 23:34:03 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-15 23:34:22 +0000 UTC 2020-03-15 23:34:03 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 15 23:34:22.162: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-ffl5q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ffl5q/replicasets/test-rollover-deployment-5b8479fdb6,UID:75c08e69-6715-11ea-99e8-0242ac110002,ResourceVersion:51047,Generation:2,CreationTimestamp:2020-03-15 23:34:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7489c8ea-6715-11ea-99e8-0242ac110002 0xc001cac6c7 0xc001cac6c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 15 23:34:22.162: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 15 23:34:22.163: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-ffl5q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ffl5q/replicasets/test-rollover-controller,UID:7057f67d-6715-11ea-99e8-0242ac110002,ResourceVersion:51057,Generation:2,CreationTimestamp:2020-03-15 23:33:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7489c8ea-6715-11ea-99e8-0242ac110002 0xc001cac3c7 0xc001cac3c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 23:34:22.163: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-ffl5q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ffl5q/replicasets/test-rollover-deployment-58494b7559,UID:748c9a7f-6715-11ea-99e8-0242ac110002,ResourceVersion:51010,Generation:2,CreationTimestamp:2020-03-15 23:34:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7489c8ea-6715-11ea-99e8-0242ac110002 0xc001cac487 0xc001cac488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 23:34:22.166: INFO: Pod "test-rollover-deployment-5b8479fdb6-qc29q" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-qc29q,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-ffl5q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ffl5q/pods/test-rollover-deployment-5b8479fdb6-qc29q,UID:75cc54ab-6715-11ea-99e8-0242ac110002,ResourceVersion:51026,Generation:0,CreationTimestamp:2020-03-15 23:34:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 75c08e69-6715-11ea-99e8-0242ac110002 0xc001cadc27 0xc001cadc28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s7tzq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s7tzq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-s7tzq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cadca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cadcc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:34:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:34:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:34:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:34:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.21,StartTime:2020-03-15 23:34:05 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-15 23:34:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://2dd3050d4b0f36b019d784e54d2737be04878f9f818af8b59c22932bcfcafdf3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:34:22.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-ffl5q" for this suite. Mar 15 23:34:28.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:34:28.354: INFO: namespace: e2e-tests-deployment-ffl5q, resource: bindings, ignored listing per whitelist Mar 15 23:34:28.386: INFO: namespace e2e-tests-deployment-ffl5q deletion completed in 6.216967565s • [SLOW TEST:31.978 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:34:28.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
STEP: creating all guestbook components
Mar 15 23:34:28.681: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Mar 15 23:34:28.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rsngb'
Mar 15 23:34:29.199: INFO: stderr: ""
Mar 15 23:34:29.199: INFO: stdout: "service/redis-slave created\n"
Mar 15 23:34:29.199: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Mar 15 23:34:29.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rsngb'
Mar 15 23:34:29.540: INFO: stderr: ""
Mar 15 23:34:29.540: INFO: stdout: "service/redis-master created\n"
Mar 15 23:34:29.540: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 15 23:34:29.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rsngb'
Mar 15 23:34:29.875: INFO: stderr: ""
Mar 15 23:34:29.875: INFO: stdout: "service/frontend created\n"
Mar 15 23:34:29.875: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Mar 15 23:34:29.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rsngb'
Mar 15 23:34:30.107: INFO: stderr: ""
Mar 15 23:34:30.107: INFO: stdout: "deployment.extensions/frontend created\n"
Mar 15 23:34:30.108: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 15 23:34:30.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rsngb'
Mar 15 23:34:30.520: INFO: stderr: ""
Mar 15 23:34:30.520: INFO: stdout: "deployment.extensions/redis-master created\n"
Mar 15 23:34:30.520: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Mar 15 23:34:30.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rsngb'
Mar 15 23:34:30.845: INFO: stderr: ""
Mar 15 23:34:30.845: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Mar 15 23:34:30.845: INFO: Waiting for all frontend pods to be Running.
Mar 15 23:34:40.896: INFO: Waiting for frontend to serve content.
Mar 15 23:34:40.911: INFO: Trying to add a new entry to the guestbook.
Mar 15 23:34:40.927: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Mar 15 23:34:40.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rsngb'
Mar 15 23:34:41.089: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 15 23:34:41.089: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 15 23:34:41.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rsngb'
Mar 15 23:34:41.273: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Mar 15 23:34:41.273: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 15 23:34:41.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rsngb' Mar 15 23:34:41.432: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 23:34:41.433: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 15 23:34:41.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rsngb' Mar 15 23:34:41.551: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 23:34:41.551: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 15 23:34:41.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rsngb' Mar 15 23:34:41.720: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 23:34:41.720: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 15 23:34:41.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rsngb' Mar 15 23:34:41.884: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 15 23:34:41.884: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:34:41.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rsngb" for this suite. Mar 15 23:35:22.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:35:22.530: INFO: namespace: e2e-tests-kubectl-rsngb, resource: bindings, ignored listing per whitelist Mar 15 23:35:22.534: INFO: namespace e2e-tests-kubectl-rsngb deletion completed in 40.418063105s • [SLOW TEST:54.148 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:35:22.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:35:22.766: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:35:26.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-cgpvf" for this suite. Mar 15 23:36:16.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:36:17.200: INFO: namespace: e2e-tests-pods-cgpvf, resource: bindings, ignored listing per whitelist Mar 15 23:36:17.233: INFO: namespace e2e-tests-pods-cgpvf deletion completed in 50.318200579s • [SLOW TEST:54.698 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:36:17.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 15 23:36:17.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7ld8d' Mar 15 23:36:17.554: INFO: stderr: "" Mar 15 23:36:17.554: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 15 23:36:22.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7ld8d -o json' Mar 15 23:36:22.698: INFO: stderr: "" Mar 15 23:36:22.698: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-15T23:36:17Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-7ld8d\",\n \"resourceVersion\": \"51544\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-7ld8d/pods/e2e-test-nginx-pod\",\n \"uid\": \"c468441a-6715-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-vhgbj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": 
\"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-vhgbj\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-vhgbj\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-15T23:36:17Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-15T23:36:21Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-15T23:36:21Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-15T23:36:17Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a14b3b91df10520584bbfa5b025f9ce2558e81fdf881fc5492be6dbe5189b58b\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-15T23:36:20Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-15T23:36:17Z\"\n }\n}\n" STEP: replace 
the image in the pod Mar 15 23:36:22.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-7ld8d' Mar 15 23:36:23.334: INFO: stderr: "" Mar 15 23:36:23.334: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Mar 15 23:36:23.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7ld8d' Mar 15 23:36:31.734: INFO: stderr: "" Mar 15 23:36:31.734: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:36:31.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7ld8d" for this suite. 
Mar 15 23:36:37.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:36:37.798: INFO: namespace: e2e-tests-kubectl-7ld8d, resource: bindings, ignored listing per whitelist Mar 15 23:36:37.847: INFO: namespace e2e-tests-kubectl-7ld8d deletion completed in 6.104288524s • [SLOW TEST:20.614 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:36:37.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:36:37.942: INFO: Creating deployment "test-recreate-deployment" Mar 15 23:36:37.958: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 15 23:36:37.967: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Mar 15 
23:36:39.974: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 15 23:36:39.977: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912197, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912197, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912198, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719912197, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 15 23:36:41.980: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 15 23:36:41.985: INFO: Updating deployment test-recreate-deployment Mar 15 23:36:41.985: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 15 23:36:42.202: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-zhp28,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zhp28/deployments/test-recreate-deployment,UID:d0922b8b-6715-11ea-99e8-0242ac110002,ResourceVersion:51651,Generation:2,CreationTimestamp:2020-03-15 23:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-15 23:36:42 +0000 UTC 2020-03-15 23:36:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-15 23:36:42 +0000 UTC 2020-03-15 23:36:37 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 15 23:36:42.205: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-zhp28,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zhp28/replicasets/test-recreate-deployment-589c4bfd,UID:d3086526-6715-11ea-99e8-0242ac110002,ResourceVersion:51648,Generation:1,CreationTimestamp:2020-03-15 23:36:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d0922b8b-6715-11ea-99e8-0242ac110002 0xc00226fc8f 0xc00226fcb0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 23:36:42.205: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 15 23:36:42.205: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-zhp28,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zhp28/replicasets/test-recreate-deployment-5bf7f65dc,UID:d095a45a-6715-11ea-99e8-0242ac110002,ResourceVersion:51640,Generation:2,CreationTimestamp:2020-03-15 23:36:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d0922b8b-6715-11ea-99e8-0242ac110002 0xc00226fe00 0xc00226fe01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 15 23:36:42.208: INFO: Pod "test-recreate-deployment-589c4bfd-j2m9j" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-j2m9j,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-zhp28,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zhp28/pods/test-recreate-deployment-589c4bfd-j2m9j,UID:d309c16d-6715-11ea-99e8-0242ac110002,ResourceVersion:51652,Generation:0,CreationTimestamp:2020-03-15 23:36:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd d3086526-6715-11ea-99e8-0242ac110002 0xc00231e5cf 0xc00231e5e0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-46mpr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-46mpr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-46mpr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00231e660} {node.kubernetes.io/unreachable Exists NoExecute 0xc00231e680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:36:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:36:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:36:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 23:36:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-03-15 23:36:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:36:42.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-zhp28" for this suite. 
Mar 15 23:36:48.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:36:48.311: INFO: namespace: e2e-tests-deployment-zhp28, resource: bindings, ignored listing per whitelist Mar 15 23:36:48.363: INFO: namespace e2e-tests-deployment-zhp28 deletion completed in 6.152198889s • [SLOW TEST:10.515 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:36:48.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 23:36:48.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6d3b4de-6715-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-jdsz4" to be "success or failure" Mar 15 23:36:48.488: INFO: Pod "downwardapi-volume-d6d3b4de-6715-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.449675ms Mar 15 23:36:50.492: INFO: Pod "downwardapi-volume-d6d3b4de-6715-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046638371s Mar 15 23:36:52.661: INFO: Pod "downwardapi-volume-d6d3b4de-6715-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.215723197s STEP: Saw pod success Mar 15 23:36:52.661: INFO: Pod "downwardapi-volume-d6d3b4de-6715-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:36:52.664: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-d6d3b4de-6715-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 15 23:36:52.899: INFO: Waiting for pod downwardapi-volume-d6d3b4de-6715-11ea-811c-0242ac110013 to disappear Mar 15 23:36:52.929: INFO: Pod downwardapi-volume-d6d3b4de-6715-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:36:52.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jdsz4" for this suite. 
Mar 15 23:36:58.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:36:59.032: INFO: namespace: e2e-tests-projected-jdsz4, resource: bindings, ignored listing per whitelist Mar 15 23:36:59.044: INFO: namespace e2e-tests-projected-jdsz4 deletion completed in 6.111901465s • [SLOW TEST:10.681 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:36:59.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-dd392236-6715-11ea-811c-0242ac110013 Mar 15 23:36:59.252: INFO: Pod name my-hostname-basic-dd392236-6715-11ea-811c-0242ac110013: Found 0 pods out of 1 Mar 15 23:37:04.257: INFO: Pod name my-hostname-basic-dd392236-6715-11ea-811c-0242ac110013: Found 1 pods out of 1 Mar 15 23:37:04.257: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-dd392236-6715-11ea-811c-0242ac110013" are running Mar 15 23:37:04.260: 
INFO: Pod "my-hostname-basic-dd392236-6715-11ea-811c-0242ac110013-n9rx4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 23:36:59 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 23:37:01 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 23:37:01 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 23:36:59 +0000 UTC Reason: Message:}]) Mar 15 23:37:04.260: INFO: Trying to dial the pod Mar 15 23:37:09.272: INFO: Controller my-hostname-basic-dd392236-6715-11ea-811c-0242ac110013: Got expected result from replica 1 [my-hostname-basic-dd392236-6715-11ea-811c-0242ac110013-n9rx4]: "my-hostname-basic-dd392236-6715-11ea-811c-0242ac110013-n9rx4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:37:09.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-wj9b5" for this suite. 
Mar 15 23:37:15.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:37:15.322: INFO: namespace: e2e-tests-replication-controller-wj9b5, resource: bindings, ignored listing per whitelist Mar 15 23:37:15.366: INFO: namespace e2e-tests-replication-controller-wj9b5 deletion completed in 6.089560932s • [SLOW TEST:16.322 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:37:15.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:37:15.450: INFO: Creating ReplicaSet my-hostname-basic-e6ed4d32-6715-11ea-811c-0242ac110013 Mar 15 23:37:15.503: INFO: Pod name my-hostname-basic-e6ed4d32-6715-11ea-811c-0242ac110013: Found 0 pods out of 1 Mar 15 23:37:20.509: INFO: Pod name my-hostname-basic-e6ed4d32-6715-11ea-811c-0242ac110013: Found 1 pods out of 1 Mar 15 23:37:20.509: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e6ed4d32-6715-11ea-811c-0242ac110013" is running Mar 15 23:37:20.512: INFO: Pod 
"my-hostname-basic-e6ed4d32-6715-11ea-811c-0242ac110013-mppmf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 23:37:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 23:37:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 23:37:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-15 23:37:15 +0000 UTC Reason: Message:}]) Mar 15 23:37:20.512: INFO: Trying to dial the pod Mar 15 23:37:25.524: INFO: Controller my-hostname-basic-e6ed4d32-6715-11ea-811c-0242ac110013: Got expected result from replica 1 [my-hostname-basic-e6ed4d32-6715-11ea-811c-0242ac110013-mppmf]: "my-hostname-basic-e6ed4d32-6715-11ea-811c-0242ac110013-mppmf", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:37:25.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-7ljrb" for this suite. 
Mar 15 23:37:31.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:37:31.633: INFO: namespace: e2e-tests-replicaset-7ljrb, resource: bindings, ignored listing per whitelist Mar 15 23:37:31.645: INFO: namespace e2e-tests-replicaset-7ljrb deletion completed in 6.116531909s • [SLOW TEST:16.279 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:37:31.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 15 23:37:31.765: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/ (200; 6.628069ms)
Mar 15 23:37:31.770: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 4.156388ms)
Mar 15 23:37:31.773: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.479131ms)
Mar 15 23:37:31.777: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.283361ms)
Mar 15 23:37:31.780: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.466124ms)
Mar 15 23:37:31.784: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.522076ms)
Mar 15 23:37:31.787: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.774629ms)
Mar 15 23:37:31.791: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.500265ms)
Mar 15 23:37:31.794: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.404366ms)
Mar 15 23:37:31.798: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.747429ms)
Mar 15 23:37:31.802: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 4.173762ms)
Mar 15 23:37:31.807: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 4.235133ms)
Mar 15 23:37:31.811: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 4.065288ms)
Mar 15 23:37:31.814: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.28231ms)
Mar 15 23:37:31.818: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.416578ms)
Mar 15 23:37:31.821: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.227751ms)
Mar 15 23:37:31.824: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.234992ms)
Mar 15 23:37:31.828: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.557688ms)
Mar 15 23:37:31.832: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.843273ms)
Mar 15 23:37:31.835: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.088577ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:37:31.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-d7wjp" for this suite. Mar 15 23:37:37.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:37:37.874: INFO: namespace: e2e-tests-proxy-d7wjp, resource: bindings, ignored listing per whitelist Mar 15 23:37:37.932: INFO: namespace e2e-tests-proxy-d7wjp deletion completed in 6.093527693s • [SLOW TEST:6.287 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:37:37.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-mlt45 STEP: creating a 
selector STEP: Creating the service pods in kubernetes Mar 15 23:37:38.288: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 15 23:38:06.569: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.28:8080/dial?request=hostName&protocol=udp&host=10.244.1.27&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-mlt45 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:38:06.569: INFO: >>> kubeConfig: /root/.kube/config I0315 23:38:06.605601 6 log.go:172] (0xc000f46370) (0xc001c363c0) Create stream I0315 23:38:06.605633 6 log.go:172] (0xc000f46370) (0xc001c363c0) Stream added, broadcasting: 1 I0315 23:38:06.607528 6 log.go:172] (0xc000f46370) Reply frame received for 1 I0315 23:38:06.607578 6 log.go:172] (0xc000f46370) (0xc00241e000) Create stream I0315 23:38:06.607601 6 log.go:172] (0xc000f46370) (0xc00241e000) Stream added, broadcasting: 3 I0315 23:38:06.608631 6 log.go:172] (0xc000f46370) Reply frame received for 3 I0315 23:38:06.608690 6 log.go:172] (0xc000f46370) (0xc001976aa0) Create stream I0315 23:38:06.608707 6 log.go:172] (0xc000f46370) (0xc001976aa0) Stream added, broadcasting: 5 I0315 23:38:06.609900 6 log.go:172] (0xc000f46370) Reply frame received for 5 I0315 23:38:06.699807 6 log.go:172] (0xc000f46370) Data frame received for 3 I0315 23:38:06.699843 6 log.go:172] (0xc00241e000) (3) Data frame handling I0315 23:38:06.699865 6 log.go:172] (0xc00241e000) (3) Data frame sent I0315 23:38:06.700766 6 log.go:172] (0xc000f46370) Data frame received for 5 I0315 23:38:06.700804 6 log.go:172] (0xc001976aa0) (5) Data frame handling I0315 23:38:06.700830 6 log.go:172] (0xc000f46370) Data frame received for 3 I0315 23:38:06.700843 6 log.go:172] (0xc00241e000) (3) Data frame handling I0315 23:38:06.702555 6 log.go:172] (0xc000f46370) Data frame received for 1 I0315 23:38:06.702601 6 log.go:172] (0xc001c363c0) (1) Data 
frame handling I0315 23:38:06.702630 6 log.go:172] (0xc001c363c0) (1) Data frame sent I0315 23:38:06.702717 6 log.go:172] (0xc000f46370) (0xc001c363c0) Stream removed, broadcasting: 1 I0315 23:38:06.702788 6 log.go:172] (0xc000f46370) Go away received I0315 23:38:06.702893 6 log.go:172] (0xc000f46370) (0xc001c363c0) Stream removed, broadcasting: 1 I0315 23:38:06.702913 6 log.go:172] (0xc000f46370) (0xc00241e000) Stream removed, broadcasting: 3 I0315 23:38:06.702922 6 log.go:172] (0xc000f46370) (0xc001976aa0) Stream removed, broadcasting: 5 Mar 15 23:38:06.702: INFO: Waiting for endpoints: map[] Mar 15 23:38:06.706: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.28:8080/dial?request=hostName&protocol=udp&host=10.244.2.8&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-mlt45 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:38:06.706: INFO: >>> kubeConfig: /root/.kube/config I0315 23:38:06.741404 6 log.go:172] (0xc000dc3290) (0xc002506280) Create stream I0315 23:38:06.741432 6 log.go:172] (0xc000dc3290) (0xc002506280) Stream added, broadcasting: 1 I0315 23:38:06.743209 6 log.go:172] (0xc000dc3290) Reply frame received for 1 I0315 23:38:06.743238 6 log.go:172] (0xc000dc3290) (0xc001976b40) Create stream I0315 23:38:06.743248 6 log.go:172] (0xc000dc3290) (0xc001976b40) Stream added, broadcasting: 3 I0315 23:38:06.744219 6 log.go:172] (0xc000dc3290) Reply frame received for 3 I0315 23:38:06.744269 6 log.go:172] (0xc000dc3290) (0xc00241e280) Create stream I0315 23:38:06.744286 6 log.go:172] (0xc000dc3290) (0xc00241e280) Stream added, broadcasting: 5 I0315 23:38:06.745491 6 log.go:172] (0xc000dc3290) Reply frame received for 5 I0315 23:38:06.820995 6 log.go:172] (0xc000dc3290) Data frame received for 3 I0315 23:38:06.821028 6 log.go:172] (0xc001976b40) (3) Data frame handling I0315 23:38:06.821050 6 log.go:172] (0xc001976b40) (3) Data frame sent I0315 
23:38:06.821823 6 log.go:172] (0xc000dc3290) Data frame received for 3 I0315 23:38:06.821869 6 log.go:172] (0xc001976b40) (3) Data frame handling I0315 23:38:06.821914 6 log.go:172] (0xc000dc3290) Data frame received for 5 I0315 23:38:06.821938 6 log.go:172] (0xc00241e280) (5) Data frame handling I0315 23:38:06.823400 6 log.go:172] (0xc000dc3290) Data frame received for 1 I0315 23:38:06.823433 6 log.go:172] (0xc002506280) (1) Data frame handling I0315 23:38:06.823463 6 log.go:172] (0xc002506280) (1) Data frame sent I0315 23:38:06.823496 6 log.go:172] (0xc000dc3290) (0xc002506280) Stream removed, broadcasting: 1 I0315 23:38:06.823537 6 log.go:172] (0xc000dc3290) Go away received I0315 23:38:06.823639 6 log.go:172] (0xc000dc3290) (0xc002506280) Stream removed, broadcasting: 1 I0315 23:38:06.823668 6 log.go:172] (0xc000dc3290) (0xc001976b40) Stream removed, broadcasting: 3 I0315 23:38:06.823680 6 log.go:172] (0xc000dc3290) (0xc00241e280) Stream removed, broadcasting: 5 Mar 15 23:38:06.823: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:38:06.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-mlt45" for this suite. 
Mar 15 23:38:30.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:38:30.875: INFO: namespace: e2e-tests-pod-network-test-mlt45, resource: bindings, ignored listing per whitelist Mar 15 23:38:30.915: INFO: namespace e2e-tests-pod-network-test-mlt45 deletion completed in 24.087585748s • [SLOW TEST:52.983 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:38:30.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 15 23:38:31.078: INFO: Waiting up to 5m0s for pod "pod-140047af-6716-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-488xv" to be "success or failure" Mar 15 23:38:31.098: INFO: Pod "pod-140047af-6716-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.259239ms Mar 15 23:38:33.256: INFO: Pod "pod-140047af-6716-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177937627s Mar 15 23:38:35.260: INFO: Pod "pod-140047af-6716-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.182064173s Mar 15 23:38:37.264: INFO: Pod "pod-140047af-6716-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.186095486s STEP: Saw pod success Mar 15 23:38:37.264: INFO: Pod "pod-140047af-6716-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:38:37.267: INFO: Trying to get logs from node hunter-worker2 pod pod-140047af-6716-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 15 23:38:37.310: INFO: Waiting for pod pod-140047af-6716-11ea-811c-0242ac110013 to disappear Mar 15 23:38:37.319: INFO: Pod pod-140047af-6716-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:38:37.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-488xv" for this suite. 
Mar 15 23:38:43.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:38:43.443: INFO: namespace: e2e-tests-emptydir-488xv, resource: bindings, ignored listing per whitelist Mar 15 23:38:43.468: INFO: namespace e2e-tests-emptydir-488xv deletion completed in 6.145639196s • [SLOW TEST:12.553 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:38:43.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 15 23:38:43.581: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b70f864-6716-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-85qlr" to be "success or failure" Mar 15 23:38:43.584: INFO: Pod "downwardapi-volume-1b70f864-6716-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.57155ms Mar 15 23:38:45.588: INFO: Pod "downwardapi-volume-1b70f864-6716-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006626028s Mar 15 23:38:47.591: INFO: Pod "downwardapi-volume-1b70f864-6716-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010396528s STEP: Saw pod success Mar 15 23:38:47.591: INFO: Pod "downwardapi-volume-1b70f864-6716-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:38:47.594: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1b70f864-6716-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 15 23:38:47.624: INFO: Waiting for pod downwardapi-volume-1b70f864-6716-11ea-811c-0242ac110013 to disappear Mar 15 23:38:47.630: INFO: Pod downwardapi-volume-1b70f864-6716-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:38:47.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-85qlr" for this suite. 
Mar 15 23:38:53.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:38:53.719: INFO: namespace: e2e-tests-downward-api-85qlr, resource: bindings, ignored listing per whitelist Mar 15 23:38:53.738: INFO: namespace e2e-tests-downward-api-85qlr deletion completed in 6.085888734s • [SLOW TEST:10.270 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:38:53.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-21c88659-6716-11ea-811c-0242ac110013 STEP: Creating a pod to test consume configMaps Mar 15 23:38:54.259: INFO: Waiting up to 5m0s for pod "pod-configmaps-21cb1363-6716-11ea-811c-0242ac110013" in namespace "e2e-tests-configmap-znrnq" to be "success or failure" Mar 15 23:38:54.301: INFO: Pod "pod-configmaps-21cb1363-6716-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.606095ms Mar 15 23:38:56.305: INFO: Pod "pod-configmaps-21cb1363-6716-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04648679s Mar 15 23:38:58.309: INFO: Pod "pod-configmaps-21cb1363-6716-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050587251s STEP: Saw pod success Mar 15 23:38:58.309: INFO: Pod "pod-configmaps-21cb1363-6716-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:38:58.312: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-21cb1363-6716-11ea-811c-0242ac110013 container configmap-volume-test: STEP: delete the pod Mar 15 23:38:58.337: INFO: Waiting for pod pod-configmaps-21cb1363-6716-11ea-811c-0242ac110013 to disappear Mar 15 23:38:58.343: INFO: Pod pod-configmaps-21cb1363-6716-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:38:58.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-znrnq" for this suite. 
Mar 15 23:39:04.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:39:04.396: INFO: namespace: e2e-tests-configmap-znrnq, resource: bindings, ignored listing per whitelist
Mar 15 23:39:04.441: INFO: namespace e2e-tests-configmap-znrnq deletion completed in 6.095975627s
• [SLOW TEST:10.703 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:39:04.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Mar 15 23:39:05.155: INFO: Pod name wrapped-volume-race-2841f23d-6716-11ea-811c-0242ac110013: Found 0 pods out of 5
Mar 15 23:39:10.164: INFO: Pod name wrapped-volume-race-2841f23d-6716-11ea-811c-0242ac110013: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2841f23d-6716-11ea-811c-0242ac110013 in namespace e2e-tests-emptydir-wrapper-zmnkl, will wait for the garbage collector to delete the pods
Mar 15 23:41:12.250: INFO: Deleting ReplicationController wrapped-volume-race-2841f23d-6716-11ea-811c-0242ac110013 took: 7.808944ms
Mar 15 23:41:12.350: INFO: Terminating ReplicationController wrapped-volume-race-2841f23d-6716-11ea-811c-0242ac110013 pods took: 100.289127ms
STEP: Creating RC which spawns configmap-volume pods
Mar 15 23:41:51.895: INFO: Pod name wrapped-volume-race-8bac9405-6716-11ea-811c-0242ac110013: Found 0 pods out of 5
Mar 15 23:41:56.902: INFO: Pod name wrapped-volume-race-8bac9405-6716-11ea-811c-0242ac110013: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8bac9405-6716-11ea-811c-0242ac110013 in namespace e2e-tests-emptydir-wrapper-zmnkl, will wait for the garbage collector to delete the pods
Mar 15 23:43:41.116: INFO: Deleting ReplicationController wrapped-volume-race-8bac9405-6716-11ea-811c-0242ac110013 took: 116.622353ms
Mar 15 23:43:41.416: INFO: Terminating ReplicationController wrapped-volume-race-8bac9405-6716-11ea-811c-0242ac110013 pods took: 300.272734ms
STEP: Creating RC which spawns configmap-volume pods
Mar 15 23:44:22.362: INFO: Pod name wrapped-volume-race-e55beaa7-6716-11ea-811c-0242ac110013: Found 0 pods out of 5
Mar 15 23:44:27.370: INFO: Pod name wrapped-volume-race-e55beaa7-6716-11ea-811c-0242ac110013: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e55beaa7-6716-11ea-811c-0242ac110013 in namespace e2e-tests-emptydir-wrapper-zmnkl, will wait for the garbage collector to delete the pods
Mar 15 23:47:03.452: INFO: Deleting ReplicationController wrapped-volume-race-e55beaa7-6716-11ea-811c-0242ac110013 took: 11.837161ms
Mar 15 23:47:03.553: INFO: Terminating ReplicationController wrapped-volume-race-e55beaa7-6716-11ea-811c-0242ac110013 pods took: 100.375956ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:47:41.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-zmnkl" for this suite.
Mar 15 23:47:49.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:47:49.869: INFO: namespace: e2e-tests-emptydir-wrapper-zmnkl, resource: bindings, ignored listing per whitelist
Mar 15 23:47:49.902: INFO: namespace e2e-tests-emptydir-wrapper-zmnkl deletion completed in 8.089771338s
• [SLOW TEST:525.460 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:47:49.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cxgjq
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 15 23:47:50.015: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 15 23:48:20.102: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.12:8080/dial?request=hostName&protocol=http&host=10.244.1.45&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-cxgjq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 15 23:48:20.102: INFO: >>> kubeConfig: /root/.kube/config
I0315 23:48:20.136947 6 log.go:172] (0xc000dc3290) (0xc000e068c0) Create stream
I0315 23:48:20.136986 6 log.go:172] (0xc000dc3290) (0xc000e068c0) Stream added, broadcasting: 1
I0315 23:48:20.140794 6 log.go:172] (0xc000dc3290) Reply frame received for 1
I0315 23:48:20.140832 6 log.go:172] (0xc000dc3290) (0xc0019ba640) Create stream
I0315 23:48:20.140845 6 log.go:172] (0xc000dc3290) (0xc0019ba640) Stream added, broadcasting: 3
I0315 23:48:20.142476 6 log.go:172] (0xc000dc3290) Reply frame received for 3
I0315 23:48:20.142497 6 log.go:172] (0xc000dc3290) (0xc0019ba6e0) Create stream
I0315 23:48:20.142505 6 log.go:172] (0xc000dc3290) (0xc0019ba6e0) Stream added, broadcasting: 5
I0315 23:48:20.143212 6 log.go:172] (0xc000dc3290) Reply frame received for 5
I0315 23:48:20.193598 6 log.go:172] (0xc000dc3290) Data frame received for 3
I0315 23:48:20.193638 6 log.go:172] (0xc0019ba640) (3) Data frame handling
I0315 23:48:20.193673 6 log.go:172] (0xc0019ba640) (3) Data frame sent
I0315 23:48:20.194643 6 log.go:172] (0xc000dc3290) Data frame received for 5
I0315 23:48:20.194671 6 log.go:172] (0xc0019ba6e0) (5) Data frame handling
I0315 23:48:20.194818 6 log.go:172] (0xc000dc3290) Data frame received for 3
I0315 23:48:20.194863 6 log.go:172] (0xc0019ba640) (3) Data frame handling
I0315 23:48:20.196677 6 log.go:172] (0xc000dc3290) Data frame received for 1
I0315 23:48:20.196719 6 log.go:172] (0xc000e068c0) (1) Data frame handling
I0315 23:48:20.196760 6 log.go:172] (0xc000e068c0) (1) Data frame sent
I0315 23:48:20.196784 6 log.go:172] (0xc000dc3290) (0xc000e068c0) Stream removed, broadcasting: 1
I0315 23:48:20.196820 6 log.go:172] (0xc000dc3290) Go away received
I0315 23:48:20.196987 6 log.go:172] (0xc000dc3290) (0xc000e068c0) Stream removed, broadcasting: 1
I0315 23:48:20.197021 6 log.go:172] (0xc000dc3290) (0xc0019ba640) Stream removed, broadcasting: 3
I0315 23:48:20.197048 6 log.go:172] (0xc000dc3290) (0xc0019ba6e0) Stream removed, broadcasting: 5
Mar 15 23:48:20.197: INFO: Waiting for endpoints: map[]
Mar 15 23:48:20.200: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.12:8080/dial?request=hostName&protocol=http&host=10.244.2.11&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-cxgjq PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 15 23:48:20.201: INFO: >>> kubeConfig: /root/.kube/config
I0315 23:48:20.233338 6 log.go:172] (0xc000f46420) (0xc000d66e60) Create stream
I0315 23:48:20.233375 6 log.go:172] (0xc000f46420) (0xc000d66e60) Stream added, broadcasting: 1
I0315 23:48:20.235798 6 log.go:172] (0xc000f46420) Reply frame received for 1
I0315 23:48:20.235912 6 log.go:172] (0xc000f46420) (0xc0019ba780) Create stream
I0315 23:48:20.235927 6 log.go:172] (0xc000f46420) (0xc0019ba780) Stream added, broadcasting: 3
I0315 23:48:20.236829 6 log.go:172] (0xc000f46420) Reply frame received for 3
I0315 23:48:20.236849 6 log.go:172] (0xc000f46420) (0xc0019ba820) Create stream
I0315 23:48:20.236856 6 log.go:172] (0xc000f46420) (0xc0019ba820) Stream added, broadcasting: 5
I0315 23:48:20.238085 6 log.go:172] (0xc000f46420) Reply frame received for 5
I0315 23:48:20.307744 6 log.go:172] (0xc000f46420) Data frame received for 3
I0315 23:48:20.307796 6 log.go:172] (0xc0019ba780) (3) Data frame handling
I0315 23:48:20.307838 6 log.go:172] (0xc0019ba780) (3) Data frame sent
I0315 23:48:20.308220 6 log.go:172] (0xc000f46420) Data frame received for 5
I0315 23:48:20.308268 6 log.go:172] (0xc0019ba820) (5) Data frame handling
I0315 23:48:20.308486 6 log.go:172] (0xc000f46420) Data frame received for 3
I0315 23:48:20.308518 6 log.go:172] (0xc0019ba780) (3) Data frame handling
I0315 23:48:20.310109 6 log.go:172] (0xc000f46420) Data frame received for 1
I0315 23:48:20.310127 6 log.go:172] (0xc000d66e60) (1) Data frame handling
I0315 23:48:20.310140 6 log.go:172] (0xc000d66e60) (1) Data frame sent
I0315 23:48:20.310153 6 log.go:172] (0xc000f46420) (0xc000d66e60) Stream removed, broadcasting: 1
I0315 23:48:20.310168 6 log.go:172] (0xc000f46420) Go away received
I0315 23:48:20.310363 6 log.go:172] (0xc000f46420) (0xc000d66e60) Stream removed, broadcasting: 1
I0315 23:48:20.310416 6 log.go:172] (0xc000f46420) (0xc0019ba780) Stream removed, broadcasting: 3
I0315 23:48:20.310444 6 log.go:172] (0xc000f46420) (0xc0019ba820) Stream removed, broadcasting: 5
Mar 15 23:48:20.310: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:48:20.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-cxgjq" for this suite.
Mar 15 23:48:42.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:48:42.383: INFO: namespace: e2e-tests-pod-network-test-cxgjq, resource: bindings, ignored listing per whitelist
Mar 15 23:48:42.514: INFO: namespace e2e-tests-pod-network-test-cxgjq deletion completed in 22.199874109s
• [SLOW TEST:52.612 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:48:42.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-808124d7-6717-11ea-811c-0242ac110013
STEP: Creating a pod to test consume configMaps
Mar 15 23:48:42.618: INFO: Waiting up to 5m0s for pod "pod-configmaps-80818ae6-6717-11ea-811c-0242ac110013" in namespace "e2e-tests-configmap-l2jhs" to be "success or failure"
Mar 15 23:48:42.626: INFO: Pod "pod-configmaps-80818ae6-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 7.95549ms
Mar 15 23:48:44.631: INFO: Pod "pod-configmaps-80818ae6-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012382293s
Mar 15 23:48:46.635: INFO: Pod "pod-configmaps-80818ae6-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016272547s
Mar 15 23:48:48.640: INFO: Pod "pod-configmaps-80818ae6-6717-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021764787s
STEP: Saw pod success
Mar 15 23:48:48.640: INFO: Pod "pod-configmaps-80818ae6-6717-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 23:48:48.645: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-80818ae6-6717-11ea-811c-0242ac110013 container configmap-volume-test:
STEP: delete the pod
Mar 15 23:48:48.727: INFO: Waiting for pod pod-configmaps-80818ae6-6717-11ea-811c-0242ac110013 to disappear
Mar 15 23:48:48.766: INFO: Pod pod-configmaps-80818ae6-6717-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:48:48.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-l2jhs" for this suite.
Mar 15 23:48:56.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:48:56.876: INFO: namespace: e2e-tests-configmap-l2jhs, resource: bindings, ignored listing per whitelist
Mar 15 23:48:56.890: INFO: namespace e2e-tests-configmap-l2jhs deletion completed in 8.120548925s
• [SLOW TEST:14.376 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:48:56.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 15 23:48:57.014: INFO: Waiting up to 5m0s for pod "pod-8914e02d-6717-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-ddvzv" to be "success or failure"
Mar 15 23:48:57.030: INFO: Pod "pod-8914e02d-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 15.616009ms
Mar 15 23:48:59.034: INFO: Pod "pod-8914e02d-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019966213s
Mar 15 23:49:01.038: INFO: Pod "pod-8914e02d-6717-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.023922243s
Mar 15 23:49:03.042: INFO: Pod "pod-8914e02d-6717-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027816923s
STEP: Saw pod success
Mar 15 23:49:03.042: INFO: Pod "pod-8914e02d-6717-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 23:49:03.044: INFO: Trying to get logs from node hunter-worker pod pod-8914e02d-6717-11ea-811c-0242ac110013 container test-container:
STEP: delete the pod
Mar 15 23:49:03.243: INFO: Waiting for pod pod-8914e02d-6717-11ea-811c-0242ac110013 to disappear
Mar 15 23:49:03.291: INFO: Pod pod-8914e02d-6717-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:49:03.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ddvzv" for this suite.
Mar 15 23:49:11.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:49:11.438: INFO: namespace: e2e-tests-emptydir-ddvzv, resource: bindings, ignored listing per whitelist
Mar 15 23:49:11.486: INFO: namespace e2e-tests-emptydir-ddvzv deletion completed in 8.190367543s
• [SLOW TEST:14.595 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:49:11.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-91d93e56-6717-11ea-811c-0242ac110013
STEP: Creating a pod to test consume configMaps
Mar 15 23:49:11.790: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-91e0ce98-6717-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-dpjjs" to be "success or failure"
Mar 15 23:49:11.806: INFO: Pod "pod-projected-configmaps-91e0ce98-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 16.033191ms
Mar 15 23:49:13.822: INFO: Pod "pod-projected-configmaps-91e0ce98-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032146237s
Mar 15 23:49:15.828: INFO: Pod "pod-projected-configmaps-91e0ce98-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0378752s
Mar 15 23:49:17.894: INFO: Pod "pod-projected-configmaps-91e0ce98-6717-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104357317s
STEP: Saw pod success
Mar 15 23:49:17.894: INFO: Pod "pod-projected-configmaps-91e0ce98-6717-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 23:49:17.897: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-91e0ce98-6717-11ea-811c-0242ac110013 container projected-configmap-volume-test:
STEP: delete the pod
Mar 15 23:49:17.972: INFO: Waiting for pod pod-projected-configmaps-91e0ce98-6717-11ea-811c-0242ac110013 to disappear
Mar 15 23:49:18.000: INFO: Pod pod-projected-configmaps-91e0ce98-6717-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:49:18.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dpjjs" for this suite.
Mar 15 23:49:24.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:49:24.130: INFO: namespace: e2e-tests-projected-dpjjs, resource: bindings, ignored listing per whitelist
Mar 15 23:49:24.157: INFO: namespace e2e-tests-projected-dpjjs deletion completed in 6.1544024s
• [SLOW TEST:12.671 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:49:24.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 15 23:49:34.586: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:34.763: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:36.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:36.774: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:38.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:38.768: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:40.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:40.872: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:42.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:42.767: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:44.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:44.768: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:46.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:47.044: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:48.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:48.840: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:50.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:50.787: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:52.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:52.865: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:54.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:54.768: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:56.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:56.769: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:49:58.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:49:58.768: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:50:00.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:50:00.767: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 15 23:50:02.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 15 23:50:02.768: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:50:02.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-klqv4" for this suite.
Mar 15 23:50:26.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:50:26.853: INFO: namespace: e2e-tests-container-lifecycle-hook-klqv4, resource: bindings, ignored listing per whitelist
Mar 15 23:50:26.880: INFO: namespace e2e-tests-container-lifecycle-hook-klqv4 deletion completed in 24.108671467s
• [SLOW TEST:62.723 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:50:26.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Mar 15 23:50:27.722: INFO: Waiting up to 5m0s for pod "client-containers-bedb3816-6717-11ea-811c-0242ac110013" in namespace "e2e-tests-containers-s47bw" to be "success or failure"
Mar 15 23:50:28.107: INFO: Pod "client-containers-bedb3816-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 384.3806ms
Mar 15 23:50:30.143: INFO: Pod "client-containers-bedb3816-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42061284s
Mar 15 23:50:32.267: INFO: Pod "client-containers-bedb3816-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.544626152s
Mar 15 23:50:34.273: INFO: Pod "client-containers-bedb3816-6717-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.550618755s
Mar 15 23:50:36.276: INFO: Pod "client-containers-bedb3816-6717-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 8.55407087s
Mar 15 23:50:38.280: INFO: Pod "client-containers-bedb3816-6717-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.557954027s
STEP: Saw pod success
Mar 15 23:50:38.280: INFO: Pod "client-containers-bedb3816-6717-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 23:50:38.283: INFO: Trying to get logs from node hunter-worker pod client-containers-bedb3816-6717-11ea-811c-0242ac110013 container test-container:
STEP: delete the pod
Mar 15 23:50:38.766: INFO: Waiting for pod client-containers-bedb3816-6717-11ea-811c-0242ac110013 to disappear
Mar 15 23:50:38.781: INFO: Pod client-containers-bedb3816-6717-11ea-811c-0242ac110013 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:50:38.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-s47bw" for this suite.
Mar 15 23:50:44.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:50:44.882: INFO: namespace: e2e-tests-containers-s47bw, resource: bindings, ignored listing per whitelist
Mar 15 23:50:44.886: INFO: namespace e2e-tests-containers-s47bw deletion completed in 6.086759879s
• [SLOW TEST:18.005 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:50:44.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Mar 15 23:50:45.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-m46z2'
Mar 15 23:50:48.999: INFO: stderr: ""
Mar 15 23:50:48.999: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 15 23:50:48.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-m46z2'
Mar 15 23:50:49.123: INFO: stderr: ""
Mar 15 23:50:49.123: INFO: stdout: "update-demo-nautilus-6pkbz update-demo-nautilus-wng9z "
Mar 15 23:50:49.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pkbz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2'
Mar 15 23:50:49.205: INFO: stderr: ""
Mar 15 23:50:49.205: INFO: stdout: ""
Mar 15 23:50:49.205: INFO: update-demo-nautilus-6pkbz is created but not running
Mar 15 23:50:54.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-m46z2'
Mar 15 23:50:54.297: INFO: stderr: ""
Mar 15 23:50:54.297: INFO: stdout: "update-demo-nautilus-6pkbz update-demo-nautilus-wng9z "
Mar 15 23:50:54.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pkbz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2'
Mar 15 23:50:54.466: INFO: stderr: ""
Mar 15 23:50:54.466: INFO: stdout: "true"
Mar 15 23:50:54.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pkbz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2'
Mar 15 23:50:54.564: INFO: stderr: ""
Mar 15 23:50:54.564: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 15 23:50:54.564: INFO: validating pod update-demo-nautilus-6pkbz
Mar 15 23:50:54.567: INFO: got data: { "image": "nautilus.jpg" }
Mar 15 23:50:54.568: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 15 23:50:54.568: INFO: update-demo-nautilus-6pkbz is verified up and running
Mar 15 23:50:54.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wng9z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2'
Mar 15 23:50:54.657: INFO: stderr: ""
Mar 15 23:50:54.657: INFO: stdout: "true"
Mar 15 23:50:54.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wng9z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2'
Mar 15 23:50:54.745: INFO: stderr: ""
Mar 15 23:50:54.745: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 15 23:50:54.745: INFO: validating pod update-demo-nautilus-wng9z
Mar 15 23:50:54.749: INFO: got data: { "image": "nautilus.jpg" }
Mar 15 23:50:54.749: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 15 23:50:54.749: INFO: update-demo-nautilus-wng9z is verified up and running
STEP: scaling down the replication controller
Mar 15 23:50:54.751: INFO: scanned /root for discovery docs:
Mar 15 23:50:54.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-m46z2'
Mar 15 23:50:55.870: INFO: stderr: ""
Mar 15 23:50:55.870: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 15 23:50:55.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:50:55.970: INFO: stderr: "" Mar 15 23:50:55.970: INFO: stdout: "update-demo-nautilus-6pkbz update-demo-nautilus-wng9z " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 15 23:51:00.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:01.294: INFO: stderr: "" Mar 15 23:51:01.294: INFO: stdout: "update-demo-nautilus-6pkbz " Mar 15 23:51:01.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pkbz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:01.420: INFO: stderr: "" Mar 15 23:51:01.420: INFO: stdout: "true" Mar 15 23:51:01.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pkbz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:01.506: INFO: stderr: "" Mar 15 23:51:01.506: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 23:51:01.506: INFO: validating pod update-demo-nautilus-6pkbz Mar 15 23:51:01.509: INFO: got data: { "image": "nautilus.jpg" } Mar 15 23:51:01.509: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 15 23:51:01.509: INFO: update-demo-nautilus-6pkbz is verified up and running STEP: scaling up the replication controller Mar 15 23:51:01.512: INFO: scanned /root for discovery docs: Mar 15 23:51:01.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:02.784: INFO: stderr: "" Mar 15 23:51:02.784: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 15 23:51:02.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:02.865: INFO: stderr: "" Mar 15 23:51:02.865: INFO: stdout: "update-demo-nautilus-6pkbz update-demo-nautilus-c8g78 " Mar 15 23:51:02.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pkbz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:02.951: INFO: stderr: "" Mar 15 23:51:02.952: INFO: stdout: "true" Mar 15 23:51:02.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pkbz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:03.046: INFO: stderr: "" Mar 15 23:51:03.046: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 23:51:03.046: INFO: validating pod update-demo-nautilus-6pkbz Mar 15 23:51:03.049: INFO: got data: { "image": "nautilus.jpg" } Mar 15 23:51:03.049: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 23:51:03.049: INFO: update-demo-nautilus-6pkbz is verified up and running Mar 15 23:51:03.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c8g78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:03.150: INFO: stderr: "" Mar 15 23:51:03.150: INFO: stdout: "" Mar 15 23:51:03.150: INFO: update-demo-nautilus-c8g78 is created but not running Mar 15 23:51:08.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:08.248: INFO: stderr: "" Mar 15 23:51:08.248: INFO: stdout: "update-demo-nautilus-6pkbz update-demo-nautilus-c8g78 " Mar 15 23:51:08.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pkbz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:08.344: INFO: stderr: "" Mar 15 23:51:08.344: INFO: stdout: "true" Mar 15 23:51:08.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6pkbz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:08.438: INFO: stderr: "" Mar 15 23:51:08.438: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 23:51:08.438: INFO: validating pod update-demo-nautilus-6pkbz Mar 15 23:51:08.442: INFO: got data: { "image": "nautilus.jpg" } Mar 15 23:51:08.442: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 23:51:08.442: INFO: update-demo-nautilus-6pkbz is verified up and running Mar 15 23:51:08.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c8g78 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:08.547: INFO: stderr: "" Mar 15 23:51:08.547: INFO: stdout: "true" Mar 15 23:51:08.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c8g78 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:08.647: INFO: stderr: "" Mar 15 23:51:08.647: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 15 23:51:08.647: INFO: validating pod update-demo-nautilus-c8g78 Mar 15 23:51:08.650: INFO: got data: { "image": "nautilus.jpg" } Mar 15 23:51:08.650: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 15 23:51:08.650: INFO: update-demo-nautilus-c8g78 is verified up and running STEP: using delete to clean up resources Mar 15 23:51:08.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:08.744: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 15 23:51:08.744: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 15 23:51:08.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-m46z2' Mar 15 23:51:08.844: INFO: stderr: "No resources found.\n" Mar 15 23:51:08.844: INFO: stdout: "" Mar 15 23:51:08.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-m46z2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 15 23:51:08.948: INFO: stderr: "" Mar 15 23:51:08.948: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:51:08.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m46z2" for this 
suite. Mar 15 23:51:32.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:51:32.995: INFO: namespace: e2e-tests-kubectl-m46z2, resource: bindings, ignored listing per whitelist Mar 15 23:51:33.056: INFO: namespace e2e-tests-kubectl-m46z2 deletion completed in 24.104940958s • [SLOW TEST:48.170 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:51:33.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0315 23:51:44.270620 6 metrics_grabber.go:81] Master 
node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 15 23:51:44.270: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:51:44.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mkmph" for this suite. 
Mar 15 23:51:52.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:51:52.326: INFO: namespace: e2e-tests-gc-mkmph, resource: bindings, ignored listing per whitelist Mar 15 23:51:52.379: INFO: namespace e2e-tests-gc-mkmph deletion completed in 8.106057712s • [SLOW TEST:19.323 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:51:52.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 15 23:51:52.720: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cdgxn,SelfLink:/api/v1/namespaces/e2e-tests-watch-cdgxn/configmaps/e2e-watch-test-watch-closed,UID:f1b782e6-6717-11ea-99e8-0242ac110002,ResourceVersion:54552,Generation:0,CreationTimestamp:2020-03-15 23:51:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 15 23:51:52.720: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cdgxn,SelfLink:/api/v1/namespaces/e2e-tests-watch-cdgxn/configmaps/e2e-watch-test-watch-closed,UID:f1b782e6-6717-11ea-99e8-0242ac110002,ResourceVersion:54553,Generation:0,CreationTimestamp:2020-03-15 23:51:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 15 23:51:52.886: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cdgxn,SelfLink:/api/v1/namespaces/e2e-tests-watch-cdgxn/configmaps/e2e-watch-test-watch-closed,UID:f1b782e6-6717-11ea-99e8-0242ac110002,ResourceVersion:54554,Generation:0,CreationTimestamp:2020-03-15 23:51:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 15 23:51:52.886: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-cdgxn,SelfLink:/api/v1/namespaces/e2e-tests-watch-cdgxn/configmaps/e2e-watch-test-watch-closed,UID:f1b782e6-6717-11ea-99e8-0242ac110002,ResourceVersion:54555,Generation:0,CreationTimestamp:2020-03-15 23:51:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:51:52.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-cdgxn" for this suite. 
Mar 15 23:51:59.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:51:59.109: INFO: namespace: e2e-tests-watch-cdgxn, resource: bindings, ignored listing per whitelist Mar 15 23:51:59.175: INFO: namespace e2e-tests-watch-cdgxn deletion completed in 6.228886822s • [SLOW TEST:6.795 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:51:59.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-r9m6c STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 15 23:51:59.282: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 15 23:52:23.389: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.21 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-r9m6c 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:52:23.389: INFO: >>> kubeConfig: /root/.kube/config I0315 23:52:23.421579 6 log.go:172] (0xc001efe2c0) (0xc001bc8b40) Create stream I0315 23:52:23.421625 6 log.go:172] (0xc001efe2c0) (0xc001bc8b40) Stream added, broadcasting: 1 I0315 23:52:23.425531 6 log.go:172] (0xc001efe2c0) Reply frame received for 1 I0315 23:52:23.425579 6 log.go:172] (0xc001efe2c0) (0xc001bc8be0) Create stream I0315 23:52:23.425591 6 log.go:172] (0xc001efe2c0) (0xc001bc8be0) Stream added, broadcasting: 3 I0315 23:52:23.428533 6 log.go:172] (0xc001efe2c0) Reply frame received for 3 I0315 23:52:23.428570 6 log.go:172] (0xc001efe2c0) (0xc001bc8c80) Create stream I0315 23:52:23.428582 6 log.go:172] (0xc001efe2c0) (0xc001bc8c80) Stream added, broadcasting: 5 I0315 23:52:23.429511 6 log.go:172] (0xc001efe2c0) Reply frame received for 5 I0315 23:52:24.494287 6 log.go:172] (0xc001efe2c0) Data frame received for 3 I0315 23:52:24.494362 6 log.go:172] (0xc001bc8be0) (3) Data frame handling I0315 23:52:24.494398 6 log.go:172] (0xc001bc8be0) (3) Data frame sent I0315 23:52:24.494476 6 log.go:172] (0xc001efe2c0) Data frame received for 5 I0315 23:52:24.494511 6 log.go:172] (0xc001bc8c80) (5) Data frame handling I0315 23:52:24.494832 6 log.go:172] (0xc001efe2c0) Data frame received for 3 I0315 23:52:24.494850 6 log.go:172] (0xc001bc8be0) (3) Data frame handling I0315 23:52:24.496624 6 log.go:172] (0xc001efe2c0) Data frame received for 1 I0315 23:52:24.496669 6 log.go:172] (0xc001bc8b40) (1) Data frame handling I0315 23:52:24.496691 6 log.go:172] (0xc001bc8b40) (1) Data frame sent I0315 23:52:24.496709 6 log.go:172] (0xc001efe2c0) (0xc001bc8b40) Stream removed, broadcasting: 1 I0315 23:52:24.496732 6 log.go:172] (0xc001efe2c0) Go away received I0315 23:52:24.496819 6 log.go:172] (0xc001efe2c0) (0xc001bc8b40) Stream removed, broadcasting: 1 I0315 23:52:24.496843 6 log.go:172] 
(0xc001efe2c0) (0xc001bc8be0) Stream removed, broadcasting: 3 I0315 23:52:24.496857 6 log.go:172] (0xc001efe2c0) (0xc001bc8c80) Stream removed, broadcasting: 5 Mar 15 23:52:24.496: INFO: Found all expected endpoints: [netserver-0] Mar 15 23:52:24.639: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.57 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-r9m6c PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 15 23:52:24.639: INFO: >>> kubeConfig: /root/.kube/config I0315 23:52:24.663865 6 log.go:172] (0xc000dc33f0) (0xc0026759a0) Create stream I0315 23:52:24.663893 6 log.go:172] (0xc000dc33f0) (0xc0026759a0) Stream added, broadcasting: 1 I0315 23:52:24.665520 6 log.go:172] (0xc000dc33f0) Reply frame received for 1 I0315 23:52:24.665546 6 log.go:172] (0xc000dc33f0) (0xc001ec9720) Create stream I0315 23:52:24.665554 6 log.go:172] (0xc000dc33f0) (0xc001ec9720) Stream added, broadcasting: 3 I0315 23:52:24.666273 6 log.go:172] (0xc000dc33f0) Reply frame received for 3 I0315 23:52:24.666312 6 log.go:172] (0xc000dc33f0) (0xc002675a40) Create stream I0315 23:52:24.666321 6 log.go:172] (0xc000dc33f0) (0xc002675a40) Stream added, broadcasting: 5 I0315 23:52:24.667068 6 log.go:172] (0xc000dc33f0) Reply frame received for 5 I0315 23:52:25.717847 6 log.go:172] (0xc000dc33f0) Data frame received for 5 I0315 23:52:25.717886 6 log.go:172] (0xc002675a40) (5) Data frame handling I0315 23:52:25.717911 6 log.go:172] (0xc000dc33f0) Data frame received for 3 I0315 23:52:25.717922 6 log.go:172] (0xc001ec9720) (3) Data frame handling I0315 23:52:25.717939 6 log.go:172] (0xc001ec9720) (3) Data frame sent I0315 23:52:25.717950 6 log.go:172] (0xc000dc33f0) Data frame received for 3 I0315 23:52:25.717962 6 log.go:172] (0xc001ec9720) (3) Data frame handling I0315 23:52:25.719214 6 log.go:172] (0xc000dc33f0) Data frame received for 1 I0315 23:52:25.719238 6 log.go:172] 
(0xc0026759a0) (1) Data frame handling I0315 23:52:25.719248 6 log.go:172] (0xc0026759a0) (1) Data frame sent I0315 23:52:25.719410 6 log.go:172] (0xc000dc33f0) (0xc0026759a0) Stream removed, broadcasting: 1 I0315 23:52:25.719460 6 log.go:172] (0xc000dc33f0) Go away received I0315 23:52:25.719574 6 log.go:172] (0xc000dc33f0) (0xc0026759a0) Stream removed, broadcasting: 1 I0315 23:52:25.719606 6 log.go:172] (0xc000dc33f0) (0xc001ec9720) Stream removed, broadcasting: 3 I0315 23:52:25.719623 6 log.go:172] (0xc000dc33f0) (0xc002675a40) Stream removed, broadcasting: 5 Mar 15 23:52:25.719: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:52:25.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-r9m6c" for this suite. Mar 15 23:52:51.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:52:51.845: INFO: namespace: e2e-tests-pod-network-test-r9m6c, resource: bindings, ignored listing per whitelist Mar 15 23:52:51.848: INFO: namespace e2e-tests-pod-network-test-r9m6c deletion completed in 26.124955084s • [SLOW TEST:52.673 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:52:51.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-1520796e-6718-11ea-811c-0242ac110013 STEP: Creating a pod to test consume secrets Mar 15 23:52:52.144: INFO: Waiting up to 5m0s for pod "pod-secrets-152a59c3-6718-11ea-811c-0242ac110013" in namespace "e2e-tests-secrets-qcxdr" to be "success or failure" Mar 15 23:52:52.199: INFO: Pod "pod-secrets-152a59c3-6718-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 54.687132ms Mar 15 23:52:54.202: INFO: Pod "pod-secrets-152a59c3-6718-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057685498s Mar 15 23:52:56.206: INFO: Pod "pod-secrets-152a59c3-6718-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061636222s Mar 15 23:52:58.210: INFO: Pod "pod-secrets-152a59c3-6718-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.065354658s STEP: Saw pod success Mar 15 23:52:58.210: INFO: Pod "pod-secrets-152a59c3-6718-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 15 23:52:58.212: INFO: Trying to get logs from node hunter-worker pod pod-secrets-152a59c3-6718-11ea-811c-0242ac110013 container secret-volume-test: STEP: delete the pod Mar 15 23:52:58.317: INFO: Waiting for pod pod-secrets-152a59c3-6718-11ea-811c-0242ac110013 to disappear Mar 15 23:52:58.343: INFO: Pod pod-secrets-152a59c3-6718-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:52:58.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qcxdr" for this suite. Mar 15 23:53:04.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:53:04.438: INFO: namespace: e2e-tests-secrets-qcxdr, resource: bindings, ignored listing per whitelist Mar 15 23:53:04.448: INFO: namespace e2e-tests-secrets-qcxdr deletion completed in 6.102360581s STEP: Destroying namespace "e2e-tests-secret-namespace-xbt4z" for this suite. 
Mar 15 23:53:10.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:53:10.521: INFO: namespace: e2e-tests-secret-namespace-xbt4z, resource: bindings, ignored listing per whitelist Mar 15 23:53:10.572: INFO: namespace e2e-tests-secret-namespace-xbt4z deletion completed in 6.123623693s • [SLOW TEST:18.724 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:53:10.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-xvlj2 Mar 15 23:53:14.681: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-xvlj2 STEP: checking the pod's current state and verifying that restartCount is present Mar 
15 23:53:14.684: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 15 23:57:16.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-xvlj2" for this suite. Mar 15 23:57:23.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 15 23:57:23.186: INFO: namespace: e2e-tests-container-probe-xvlj2, resource: bindings, ignored listing per whitelist Mar 15 23:57:23.215: INFO: namespace e2e-tests-container-probe-xvlj2 deletion completed in 6.369931765s • [SLOW TEST:252.643 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 15 23:57:23.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:57:29.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-mrvpk" for this suite.
Mar 15 23:57:35.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:57:35.823: INFO: namespace: e2e-tests-emptydir-wrapper-mrvpk, resource: bindings, ignored listing per whitelist
Mar 15 23:57:35.858: INFO: namespace e2e-tests-emptydir-wrapper-mrvpk deletion completed in 6.085112814s
• [SLOW TEST:12.643 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:57:35.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-be6f8a77-6718-11ea-811c-0242ac110013
STEP: Creating a pod to test consume configMaps
Mar 15 23:57:36.081: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be720820-6718-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-xnl6x" to be "success or failure"
Mar 15 23:57:36.200: INFO: Pod "pod-projected-configmaps-be720820-6718-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 119.114388ms
Mar 15 23:57:38.205: INFO: Pod "pod-projected-configmaps-be720820-6718-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123977817s
Mar 15 23:57:40.434: INFO: Pod "pod-projected-configmaps-be720820-6718-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.353246502s
Mar 15 23:57:42.438: INFO: Pod "pod-projected-configmaps-be720820-6718-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.357692903s
STEP: Saw pod success
Mar 15 23:57:42.439: INFO: Pod "pod-projected-configmaps-be720820-6718-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 15 23:57:42.442: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-be720820-6718-11ea-811c-0242ac110013 container projected-configmap-volume-test:
STEP: delete the pod
Mar 15 23:57:42.475: INFO: Waiting for pod pod-projected-configmaps-be720820-6718-11ea-811c-0242ac110013 to disappear
Mar 15 23:57:42.485: INFO: Pod pod-projected-configmaps-be720820-6718-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 15 23:57:42.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xnl6x" for this suite.
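The projected ConfigMap test above creates a pod that consumes the same ConfigMap through two projected volumes mounted at different paths. A minimal sketch of such a pod follows; the names, paths, and image are illustrative, not the exact e2e fixtures. The script only prints the manifest; pipe it to `kubectl apply -f -` against a test cluster to try it.

```shell
# Sketch only: one ConfigMap projected into two volumes in the same pod.
# All resource names below are hypothetical, not the suite's generated names.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # Read the same key through both mount points, as the test's container does.
    command: ["sh", "-c", "cat /etc/projected-configmap-volume-1/data-1 /etc/projected-configmap-volume-2/data-1"]
    volumeMounts:
    - name: projected-configmap-volume-1
      mountPath: /etc/projected-configmap-volume-1
      readOnly: true
    - name: projected-configmap-volume-2
      mountPath: /etc/projected-configmap-volume-2
      readOnly: true
  volumes:
  - name: projected-configmap-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-demo   # hypothetical name
  - name: projected-configmap-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-demo
EOF
)
printf '%s\n' "$manifest"
```

The pod is expected to reach `Succeeded` once both mounts serve the key, which is the "success or failure" condition the log above waits on.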
Mar 15 23:57:48.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 15 23:57:48.565: INFO: namespace: e2e-tests-projected-xnl6x, resource: bindings, ignored listing per whitelist
Mar 15 23:57:48.565: INFO: namespace e2e-tests-projected-xnl6x deletion completed in 6.077506371s
• [SLOW TEST:12.707 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 15 23:57:48.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-7wszt
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP:
Creating stateful set ss in namespace e2e-tests-statefulset-7wszt STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-7wszt Mar 15 23:57:48.691: INFO: Found 0 stateful pods, waiting for 1 Mar 15 23:57:58.696: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 15 23:57:58.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 23:57:58.975: INFO: stderr: "I0315 23:57:58.815182 1974 log.go:172] (0xc00013a840) (0xc00067b360) Create stream\nI0315 23:57:58.815233 1974 log.go:172] (0xc00013a840) (0xc00067b360) Stream added, broadcasting: 1\nI0315 23:57:58.817242 1974 log.go:172] (0xc00013a840) Reply frame received for 1\nI0315 23:57:58.817302 1974 log.go:172] (0xc00013a840) (0xc00075c000) Create stream\nI0315 23:57:58.817327 1974 log.go:172] (0xc00013a840) (0xc00075c000) Stream added, broadcasting: 3\nI0315 23:57:58.818049 1974 log.go:172] (0xc00013a840) Reply frame received for 3\nI0315 23:57:58.818097 1974 log.go:172] (0xc00013a840) (0xc00077e000) Create stream\nI0315 23:57:58.818117 1974 log.go:172] (0xc00013a840) (0xc00077e000) Stream added, broadcasting: 5\nI0315 23:57:58.818809 1974 log.go:172] (0xc00013a840) Reply frame received for 5\nI0315 23:57:58.971126 1974 log.go:172] (0xc00013a840) Data frame received for 5\nI0315 23:57:58.971192 1974 log.go:172] (0xc00013a840) Data frame received for 3\nI0315 23:57:58.971230 1974 log.go:172] (0xc00075c000) (3) Data frame handling\nI0315 23:57:58.971248 1974 log.go:172] (0xc00075c000) (3) Data frame sent\nI0315 23:57:58.971257 1974 log.go:172] (0xc00013a840) Data frame received for 3\nI0315 23:57:58.971265 1974 log.go:172] (0xc00075c000) (3) Data frame handling\nI0315 23:57:58.971294 1974 log.go:172] 
(0xc00077e000) (5) Data frame handling\nI0315 23:57:58.972616 1974 log.go:172] (0xc00013a840) Data frame received for 1\nI0315 23:57:58.972635 1974 log.go:172] (0xc00067b360) (1) Data frame handling\nI0315 23:57:58.972651 1974 log.go:172] (0xc00067b360) (1) Data frame sent\nI0315 23:57:58.972676 1974 log.go:172] (0xc00013a840) (0xc00067b360) Stream removed, broadcasting: 1\nI0315 23:57:58.972702 1974 log.go:172] (0xc00013a840) Go away received\nI0315 23:57:58.972973 1974 log.go:172] (0xc00013a840) (0xc00067b360) Stream removed, broadcasting: 1\nI0315 23:57:58.973000 1974 log.go:172] (0xc00013a840) (0xc00075c000) Stream removed, broadcasting: 3\nI0315 23:57:58.973013 1974 log.go:172] (0xc00013a840) (0xc00077e000) Stream removed, broadcasting: 5\n" Mar 15 23:57:58.975: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 23:57:58.976: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 23:57:58.979: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 15 23:58:08.984: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 15 23:58:08.984: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 23:58:09.135: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999485s Mar 15 23:58:10.140: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.858988011s Mar 15 23:58:11.144: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.854137797s Mar 15 23:58:12.149: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.849298521s Mar 15 23:58:13.153: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.845201766s Mar 15 23:58:14.158: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.840648704s Mar 15 23:58:15.194: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.835893982s Mar 
15 23:58:16.199: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.799337778s Mar 15 23:58:17.261: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.794643141s Mar 15 23:58:18.265: INFO: Verifying statefulset ss doesn't scale past 1 for another 732.560302ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-7wszt Mar 15 23:58:19.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:58:19.529: INFO: stderr: "I0315 23:58:19.392874 1997 log.go:172] (0xc000154840) (0xc000738640) Create stream\nI0315 23:58:19.392926 1997 log.go:172] (0xc000154840) (0xc000738640) Stream added, broadcasting: 1\nI0315 23:58:19.395043 1997 log.go:172] (0xc000154840) Reply frame received for 1\nI0315 23:58:19.395086 1997 log.go:172] (0xc000154840) (0xc0005c8f00) Create stream\nI0315 23:58:19.395100 1997 log.go:172] (0xc000154840) (0xc0005c8f00) Stream added, broadcasting: 3\nI0315 23:58:19.395948 1997 log.go:172] (0xc000154840) Reply frame received for 3\nI0315 23:58:19.395996 1997 log.go:172] (0xc000154840) (0xc0005c6000) Create stream\nI0315 23:58:19.396010 1997 log.go:172] (0xc000154840) (0xc0005c6000) Stream added, broadcasting: 5\nI0315 23:58:19.396809 1997 log.go:172] (0xc000154840) Reply frame received for 5\nI0315 23:58:19.523499 1997 log.go:172] (0xc000154840) Data frame received for 3\nI0315 23:58:19.523554 1997 log.go:172] (0xc0005c8f00) (3) Data frame handling\nI0315 23:58:19.523570 1997 log.go:172] (0xc0005c8f00) (3) Data frame sent\nI0315 23:58:19.523588 1997 log.go:172] (0xc000154840) Data frame received for 3\nI0315 23:58:19.523602 1997 log.go:172] (0xc0005c8f00) (3) Data frame handling\nI0315 23:58:19.523663 1997 log.go:172] (0xc000154840) Data frame received for 5\nI0315 23:58:19.523712 1997 log.go:172] (0xc0005c6000) 
(5) Data frame handling\nI0315 23:58:19.525557 1997 log.go:172] (0xc000154840) Data frame received for 1\nI0315 23:58:19.525590 1997 log.go:172] (0xc000738640) (1) Data frame handling\nI0315 23:58:19.525613 1997 log.go:172] (0xc000738640) (1) Data frame sent\nI0315 23:58:19.525636 1997 log.go:172] (0xc000154840) (0xc000738640) Stream removed, broadcasting: 1\nI0315 23:58:19.525669 1997 log.go:172] (0xc000154840) Go away received\nI0315 23:58:19.525981 1997 log.go:172] (0xc000154840) (0xc000738640) Stream removed, broadcasting: 1\nI0315 23:58:19.526017 1997 log.go:172] (0xc000154840) (0xc0005c8f00) Stream removed, broadcasting: 3\nI0315 23:58:19.526045 1997 log.go:172] (0xc000154840) (0xc0005c6000) Stream removed, broadcasting: 5\n" Mar 15 23:58:19.529: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 23:58:19.529: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 23:58:19.533: INFO: Found 1 stateful pods, waiting for 3 Mar 15 23:58:29.538: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 15 23:58:29.538: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 15 23:58:29.538: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 15 23:58:29.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 23:58:29.822: INFO: stderr: "I0315 23:58:29.679914 2019 log.go:172] (0xc00014c630) (0xc00072a640) Create stream\nI0315 23:58:29.679972 2019 log.go:172] (0xc00014c630) (0xc00072a640) Stream added, broadcasting: 1\nI0315 23:58:29.681908 2019 log.go:172] (0xc00014c630) Reply frame 
received for 1\nI0315 23:58:29.681947 2019 log.go:172] (0xc00014c630) (0xc00072a6e0) Create stream\nI0315 23:58:29.681954 2019 log.go:172] (0xc00014c630) (0xc00072a6e0) Stream added, broadcasting: 3\nI0315 23:58:29.682690 2019 log.go:172] (0xc00014c630) Reply frame received for 3\nI0315 23:58:29.682716 2019 log.go:172] (0xc00014c630) (0xc00072a780) Create stream\nI0315 23:58:29.682723 2019 log.go:172] (0xc00014c630) (0xc00072a780) Stream added, broadcasting: 5\nI0315 23:58:29.683638 2019 log.go:172] (0xc00014c630) Reply frame received for 5\nI0315 23:58:29.815960 2019 log.go:172] (0xc00014c630) Data frame received for 3\nI0315 23:58:29.815995 2019 log.go:172] (0xc00072a6e0) (3) Data frame handling\nI0315 23:58:29.816008 2019 log.go:172] (0xc00072a6e0) (3) Data frame sent\nI0315 23:58:29.816016 2019 log.go:172] (0xc00014c630) Data frame received for 3\nI0315 23:58:29.816022 2019 log.go:172] (0xc00072a6e0) (3) Data frame handling\nI0315 23:58:29.816510 2019 log.go:172] (0xc00014c630) Data frame received for 5\nI0315 23:58:29.816526 2019 log.go:172] (0xc00072a780) (5) Data frame handling\nI0315 23:58:29.817833 2019 log.go:172] (0xc00014c630) Data frame received for 1\nI0315 23:58:29.817938 2019 log.go:172] (0xc00072a640) (1) Data frame handling\nI0315 23:58:29.818057 2019 log.go:172] (0xc00072a640) (1) Data frame sent\nI0315 23:58:29.818084 2019 log.go:172] (0xc00014c630) (0xc00072a640) Stream removed, broadcasting: 1\nI0315 23:58:29.818102 2019 log.go:172] (0xc00014c630) Go away received\nI0315 23:58:29.818483 2019 log.go:172] (0xc00014c630) (0xc00072a640) Stream removed, broadcasting: 1\nI0315 23:58:29.818502 2019 log.go:172] (0xc00014c630) (0xc00072a6e0) Stream removed, broadcasting: 3\nI0315 23:58:29.818513 2019 log.go:172] (0xc00014c630) (0xc00072a780) Stream removed, broadcasting: 5\n" Mar 15 23:58:29.823: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 23:58:29.823: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || 
true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 23:58:29.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 23:58:30.808: INFO: stderr: "I0315 23:58:30.490051 2040 log.go:172] (0xc00014c580) (0xc000589400) Create stream\nI0315 23:58:30.490125 2040 log.go:172] (0xc00014c580) (0xc000589400) Stream added, broadcasting: 1\nI0315 23:58:30.493374 2040 log.go:172] (0xc00014c580) Reply frame received for 1\nI0315 23:58:30.493419 2040 log.go:172] (0xc00014c580) (0xc00033c280) Create stream\nI0315 23:58:30.493428 2040 log.go:172] (0xc00014c580) (0xc00033c280) Stream added, broadcasting: 3\nI0315 23:58:30.494418 2040 log.go:172] (0xc00014c580) Reply frame received for 3\nI0315 23:58:30.494451 2040 log.go:172] (0xc00014c580) (0xc0005894a0) Create stream\nI0315 23:58:30.494470 2040 log.go:172] (0xc00014c580) (0xc0005894a0) Stream added, broadcasting: 5\nI0315 23:58:30.495305 2040 log.go:172] (0xc00014c580) Reply frame received for 5\nI0315 23:58:30.801931 2040 log.go:172] (0xc00014c580) Data frame received for 3\nI0315 23:58:30.801966 2040 log.go:172] (0xc00033c280) (3) Data frame handling\nI0315 23:58:30.801991 2040 log.go:172] (0xc00033c280) (3) Data frame sent\nI0315 23:58:30.802732 2040 log.go:172] (0xc00014c580) Data frame received for 5\nI0315 23:58:30.802744 2040 log.go:172] (0xc0005894a0) (5) Data frame handling\nI0315 23:58:30.803114 2040 log.go:172] (0xc00014c580) Data frame received for 3\nI0315 23:58:30.803134 2040 log.go:172] (0xc00033c280) (3) Data frame handling\nI0315 23:58:30.805072 2040 log.go:172] (0xc00014c580) Data frame received for 1\nI0315 23:58:30.805089 2040 log.go:172] (0xc000589400) (1) Data frame handling\nI0315 23:58:30.805194 2040 log.go:172] (0xc000589400) (1) Data frame sent\nI0315 23:58:30.805217 2040 log.go:172] (0xc00014c580) (0xc000589400) Stream removed, 
broadcasting: 1\nI0315 23:58:30.805374 2040 log.go:172] (0xc00014c580) Go away received\nI0315 23:58:30.805408 2040 log.go:172] (0xc00014c580) (0xc000589400) Stream removed, broadcasting: 1\nI0315 23:58:30.805423 2040 log.go:172] (0xc00014c580) (0xc00033c280) Stream removed, broadcasting: 3\nI0315 23:58:30.805434 2040 log.go:172] (0xc00014c580) (0xc0005894a0) Stream removed, broadcasting: 5\n" Mar 15 23:58:30.808: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 23:58:30.808: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 23:58:30.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 15 23:58:32.179: INFO: stderr: "I0315 23:58:31.354189 2062 log.go:172] (0xc000138790) (0xc000563720) Create stream\nI0315 23:58:31.354258 2062 log.go:172] (0xc000138790) (0xc000563720) Stream added, broadcasting: 1\nI0315 23:58:31.356639 2062 log.go:172] (0xc000138790) Reply frame received for 1\nI0315 23:58:31.356673 2062 log.go:172] (0xc000138790) (0xc0006b2000) Create stream\nI0315 23:58:31.356685 2062 log.go:172] (0xc000138790) (0xc0006b2000) Stream added, broadcasting: 3\nI0315 23:58:31.357624 2062 log.go:172] (0xc000138790) Reply frame received for 3\nI0315 23:58:31.357660 2062 log.go:172] (0xc000138790) (0xc0005637c0) Create stream\nI0315 23:58:31.357723 2062 log.go:172] (0xc000138790) (0xc0005637c0) Stream added, broadcasting: 5\nI0315 23:58:31.358512 2062 log.go:172] (0xc000138790) Reply frame received for 5\nI0315 23:58:32.174826 2062 log.go:172] (0xc000138790) Data frame received for 5\nI0315 23:58:32.174869 2062 log.go:172] (0xc0005637c0) (5) Data frame handling\nI0315 23:58:32.174889 2062 log.go:172] (0xc000138790) Data frame received for 3\nI0315 23:58:32.174894 2062 log.go:172] (0xc0006b2000) (3) Data frame 
handling\nI0315 23:58:32.174903 2062 log.go:172] (0xc0006b2000) (3) Data frame sent\nI0315 23:58:32.174908 2062 log.go:172] (0xc000138790) Data frame received for 3\nI0315 23:58:32.174911 2062 log.go:172] (0xc0006b2000) (3) Data frame handling\nI0315 23:58:32.175808 2062 log.go:172] (0xc000138790) Data frame received for 1\nI0315 23:58:32.175825 2062 log.go:172] (0xc000563720) (1) Data frame handling\nI0315 23:58:32.175836 2062 log.go:172] (0xc000563720) (1) Data frame sent\nI0315 23:58:32.175848 2062 log.go:172] (0xc000138790) (0xc000563720) Stream removed, broadcasting: 1\nI0315 23:58:32.175867 2062 log.go:172] (0xc000138790) Go away received\nI0315 23:58:32.176047 2062 log.go:172] (0xc000138790) (0xc000563720) Stream removed, broadcasting: 1\nI0315 23:58:32.176069 2062 log.go:172] (0xc000138790) (0xc0006b2000) Stream removed, broadcasting: 3\nI0315 23:58:32.176080 2062 log.go:172] (0xc000138790) (0xc0005637c0) Stream removed, broadcasting: 5\n" Mar 15 23:58:32.179: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 15 23:58:32.179: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 15 23:58:32.179: INFO: Waiting for statefulset status.replicas updated to 0 Mar 15 23:58:32.489: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Mar 15 23:58:42.759: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 15 23:58:42.759: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 15 23:58:42.759: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 15 23:58:42.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999723s Mar 15 23:58:43.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.961838295s Mar 15 23:58:44.957: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 7.95745348s
Mar 15 23:58:46.232: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.828525423s
Mar 15 23:58:47.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.552957187s
Mar 15 23:58:48.700: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.355623944s
Mar 15 23:58:49.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.085018008s
Mar 15 23:58:50.861: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.034462311s
Mar 15 23:58:51.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 924.548165ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-7wszt
Mar 15 23:58:52.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 15 23:58:53.468: INFO: stderr: "I0315 23:58:53.368019 2084 log.go:172] (0xc00075c0b0) (0xc0005d6280) Create stream\nI0315 23:58:53.368073 2084 log.go:172] (0xc00075c0b0) (0xc0005d6280) Stream added, broadcasting: 1\nI0315 23:58:53.370907 2084 log.go:172] (0xc00075c0b0) Reply frame received for 1\nI0315 23:58:53.370954 2084 log.go:172] (0xc00075c0b0) (0xc000428be0) Create stream\nI0315 23:58:53.370963 2084 log.go:172] (0xc00075c0b0) (0xc000428be0) Stream added, broadcasting: 3\nI0315 23:58:53.371917 2084 log.go:172] (0xc00075c0b0) Reply frame received for 3\nI0315 23:58:53.371971 2084 log.go:172] (0xc00075c0b0) (0xc0006a0000) Create stream\nI0315 23:58:53.371988 2084 log.go:172] (0xc00075c0b0) (0xc0006a0000) Stream added, broadcasting: 5\nI0315 23:58:53.372848 2084 log.go:172] (0xc00075c0b0) Reply frame received for 5\nI0315 23:58:53.462161 2084 log.go:172] (0xc00075c0b0) Data frame received for 3\nI0315 23:58:53.462305 2084 log.go:172] (0xc000428be0) (3) Data frame handling\nI0315 23:58:53.462396 2084 log.go:172] (0xc000428be0) (3)
Data frame sent\nI0315 23:58:53.462423 2084 log.go:172] (0xc00075c0b0) Data frame received for 3\nI0315 23:58:53.462438 2084 log.go:172] (0xc000428be0) (3) Data frame handling\nI0315 23:58:53.462487 2084 log.go:172] (0xc00075c0b0) Data frame received for 5\nI0315 23:58:53.462559 2084 log.go:172] (0xc0006a0000) (5) Data frame handling\nI0315 23:58:53.464675 2084 log.go:172] (0xc00075c0b0) Data frame received for 1\nI0315 23:58:53.464711 2084 log.go:172] (0xc0005d6280) (1) Data frame handling\nI0315 23:58:53.464728 2084 log.go:172] (0xc0005d6280) (1) Data frame sent\nI0315 23:58:53.464750 2084 log.go:172] (0xc00075c0b0) (0xc0005d6280) Stream removed, broadcasting: 1\nI0315 23:58:53.464785 2084 log.go:172] (0xc00075c0b0) Go away received\nI0315 23:58:53.464932 2084 log.go:172] (0xc00075c0b0) (0xc0005d6280) Stream removed, broadcasting: 1\nI0315 23:58:53.464951 2084 log.go:172] (0xc00075c0b0) (0xc000428be0) Stream removed, broadcasting: 3\nI0315 23:58:53.464962 2084 log.go:172] (0xc00075c0b0) (0xc0006a0000) Stream removed, broadcasting: 5\n" Mar 15 23:58:53.468: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 23:58:53.468: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 23:58:53.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:58:54.282: INFO: stderr: "I0315 23:58:54.186923 2106 log.go:172] (0xc00014c6e0) (0xc0002cb400) Create stream\nI0315 23:58:54.187004 2106 log.go:172] (0xc00014c6e0) (0xc0002cb400) Stream added, broadcasting: 1\nI0315 23:58:54.189894 2106 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0315 23:58:54.190002 2106 log.go:172] (0xc00014c6e0) (0xc0005ac000) Create stream\nI0315 23:58:54.190113 2106 log.go:172] (0xc00014c6e0) (0xc0005ac000) Stream added, broadcasting: 
3\nI0315 23:58:54.191018 2106 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0315 23:58:54.191067 2106 log.go:172] (0xc00014c6e0) (0xc0004a4000) Create stream\nI0315 23:58:54.191087 2106 log.go:172] (0xc00014c6e0) (0xc0004a4000) Stream added, broadcasting: 5\nI0315 23:58:54.191807 2106 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0315 23:58:54.278631 2106 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0315 23:58:54.278681 2106 log.go:172] (0xc0005ac000) (3) Data frame handling\nI0315 23:58:54.278711 2106 log.go:172] (0xc0005ac000) (3) Data frame sent\nI0315 23:58:54.278724 2106 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0315 23:58:54.278734 2106 log.go:172] (0xc0005ac000) (3) Data frame handling\nI0315 23:58:54.278878 2106 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0315 23:58:54.278915 2106 log.go:172] (0xc0004a4000) (5) Data frame handling\nI0315 23:58:54.280043 2106 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0315 23:58:54.280085 2106 log.go:172] (0xc0002cb400) (1) Data frame handling\nI0315 23:58:54.280114 2106 log.go:172] (0xc0002cb400) (1) Data frame sent\nI0315 23:58:54.280148 2106 log.go:172] (0xc00014c6e0) (0xc0002cb400) Stream removed, broadcasting: 1\nI0315 23:58:54.280188 2106 log.go:172] (0xc00014c6e0) Go away received\nI0315 23:58:54.280306 2106 log.go:172] (0xc00014c6e0) (0xc0002cb400) Stream removed, broadcasting: 1\nI0315 23:58:54.280321 2106 log.go:172] (0xc00014c6e0) (0xc0005ac000) Stream removed, broadcasting: 3\nI0315 23:58:54.280327 2106 log.go:172] (0xc00014c6e0) (0xc0004a4000) Stream removed, broadcasting: 5\n" Mar 15 23:58:54.282: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 15 23:58:54.282: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 15 23:58:54.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:58:54.680: INFO: rc: 137 Mar 15 23:58:54.680: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] I0315 23:58:54.597844 2127 log.go:172] (0xc000138580) (0xc000671220) Create stream I0315 23:58:54.597890 2127 log.go:172] (0xc000138580) (0xc000671220) Stream added, broadcasting: 1 I0315 23:58:54.599267 2127 log.go:172] (0xc000138580) Reply frame received for 1 I0315 23:58:54.599292 2127 log.go:172] (0xc000138580) (0xc0005e0000) Create stream I0315 23:58:54.599298 2127 log.go:172] (0xc000138580) (0xc0005e0000) Stream added, broadcasting: 3 I0315 23:58:54.599820 2127 log.go:172] (0xc000138580) Reply frame received for 3 I0315 23:58:54.599861 2127 log.go:172] (0xc000138580) (0xc0005e00a0) Create stream I0315 23:58:54.599871 2127 log.go:172] (0xc000138580) (0xc0005e00a0) Stream added, broadcasting: 5 I0315 23:58:54.600358 2127 log.go:172] (0xc000138580) Reply frame received for 5 I0315 23:58:54.673761 2127 log.go:172] (0xc000138580) Data frame received for 3 I0315 23:58:54.673793 2127 log.go:172] (0xc0005e0000) (3) Data frame handling I0315 23:58:54.674047 2127 log.go:172] (0xc000138580) Data frame received for 5 I0315 23:58:54.674058 2127 log.go:172] (0xc0005e00a0) (5) Data frame handling I0315 23:58:54.676556 2127 log.go:172] (0xc000138580) Data frame received for 1 I0315 23:58:54.676574 2127 log.go:172] (0xc000671220) (1) Data frame handling I0315 23:58:54.676597 2127 log.go:172] (0xc000671220) (1) Data frame sent I0315 23:58:54.676619 2127 log.go:172] (0xc000138580) (0xc000671220) Stream removed, broadcasting: 1 I0315 23:58:54.676653 2127 log.go:172] (0xc000138580) Go away received I0315 23:58:54.677053 2127 log.go:172] (0xc000138580) (0xc000671220) 
Stream removed, broadcasting: 1 I0315 23:58:54.677089 2127 log.go:172] (0xc000138580) (0xc0005e0000) Stream removed, broadcasting: 3 I0315 23:58:54.677255 2127 log.go:172] (0xc000138580) (0xc0005e00a0) Stream removed, broadcasting: 5 command terminated with exit code 137 [] 0xc001ea4480 exit status 137 true [0xc0017fa5d0 0xc0017fa5e8 0xc0017fa600] [0xc0017fa5d0 0xc0017fa5e8 0xc0017fa600] [0xc0017fa5e0 0xc0017fa5f8] [0x9355a0 0x9355a0] 0xc0021fcae0 }: Command stdout: stderr: I0315 23:58:54.597844 2127 log.go:172] (0xc000138580) (0xc000671220) Create stream I0315 23:58:54.597890 2127 log.go:172] (0xc000138580) (0xc000671220) Stream added, broadcasting: 1 I0315 23:58:54.599267 2127 log.go:172] (0xc000138580) Reply frame received for 1 I0315 23:58:54.599292 2127 log.go:172] (0xc000138580) (0xc0005e0000) Create stream I0315 23:58:54.599298 2127 log.go:172] (0xc000138580) (0xc0005e0000) Stream added, broadcasting: 3 I0315 23:58:54.599820 2127 log.go:172] (0xc000138580) Reply frame received for 3 I0315 23:58:54.599861 2127 log.go:172] (0xc000138580) (0xc0005e00a0) Create stream I0315 23:58:54.599871 2127 log.go:172] (0xc000138580) (0xc0005e00a0) Stream added, broadcasting: 5 I0315 23:58:54.600358 2127 log.go:172] (0xc000138580) Reply frame received for 5 I0315 23:58:54.673761 2127 log.go:172] (0xc000138580) Data frame received for 3 I0315 23:58:54.673793 2127 log.go:172] (0xc0005e0000) (3) Data frame handling I0315 23:58:54.674047 2127 log.go:172] (0xc000138580) Data frame received for 5 I0315 23:58:54.674058 2127 log.go:172] (0xc0005e00a0) (5) Data frame handling I0315 23:58:54.676556 2127 log.go:172] (0xc000138580) Data frame received for 1 I0315 23:58:54.676574 2127 log.go:172] (0xc000671220) (1) Data frame handling I0315 23:58:54.676597 2127 log.go:172] (0xc000671220) (1) Data frame sent I0315 23:58:54.676619 2127 log.go:172] (0xc000138580) (0xc000671220) Stream removed, broadcasting: 1 I0315 23:58:54.676653 2127 log.go:172] (0xc000138580) Go away received I0315 
23:58:54.677053 2127 log.go:172] (0xc000138580) (0xc000671220) Stream removed, broadcasting: 1 I0315 23:58:54.677089 2127 log.go:172] (0xc000138580) (0xc0005e0000) Stream removed, broadcasting: 3 I0315 23:58:54.677255 2127 log.go:172] (0xc000138580) (0xc0005e00a0) Stream removed, broadcasting: 5 command terminated with exit code 137 error: exit status 137 Mar 15 23:59:04.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:59:04.773: INFO: rc: 1 Mar 15 23:59:04.773: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000f38210 exit status 1 true [0xc001658318 0xc001658330 0xc001658348] [0xc001658318 0xc001658330 0xc001658348] [0xc001658328 0xc001658340] [0x9355a0 0x9355a0] 0xc0020b1980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 15 23:59:14.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:59:14.865: INFO: rc: 1 Mar 15 23:59:14.866: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017a4120 exit status 1 true [0xc000e50088 0xc000e500d8 0xc000e50140] [0xc000e50088 0xc000e500d8 0xc000e50140] [0xc000e500c8 0xc000e500f0] [0x9355a0 0x9355a0] 0xc001cea1e0 }: Command stdout: stderr: Error from server (NotFound): pods 
"ss-2" not found error: exit status 1 Mar 15 23:59:24.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:59:25.115: INFO: rc: 1 Mar 15 23:59:25.115: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023f4120 exit status 1 true [0xc000304cf8 0xc000304f48 0xc000305418] [0xc000304cf8 0xc000304f48 0xc000305418] [0xc000304df0 0xc000305338] [0x9355a0 0x9355a0] 0xc001506a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 15 23:59:35.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:59:35.212: INFO: rc: 1 Mar 15 23:59:35.212: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023f4240 exit status 1 true [0xc000305448 0xc000305500 0xc000305670] [0xc000305448 0xc000305500 0xc000305670] [0xc0003054a0 0xc0003055d8] [0x9355a0 0x9355a0] 0xc001506d80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 15 23:59:45.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:59:45.297: INFO: rc: 1 Mar 15 23:59:45.297: INFO: Waiting 10s 
to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017f4120 exit status 1 true [0xc0017fa000 0xc0017fa018 0xc0017fa038] [0xc0017fa000 0xc0017fa018 0xc0017fa038] [0xc0017fa010 0xc0017fa030] [0x9355a0 0x9355a0] 0xc0024f4540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 15 23:59:55.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 15 23:59:55.389: INFO: rc: 1 Mar 15 23:59:55.389: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017f4270 exit status 1 true [0xc0017fa040 0xc0017fa058 0xc0017fa070] [0xc0017fa040 0xc0017fa058 0xc0017fa070] [0xc0017fa050 0xc0017fa068] [0x9355a0 0x9355a0] 0xc0024f52c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:00:05.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:00:05.470: INFO: rc: 1 Mar 16 00:00:05.470: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017a42d0 exit status 1 true 
[0xc000e50178 0xc000e50200 0xc000e50298] [0xc000e50178 0xc000e50200 0xc000e50298] [0xc000e501d0 0xc000e50260] [0x9355a0 0x9355a0] 0xc001cea480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:00:15.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:00:15.555: INFO: rc: 1 Mar 16 00:00:15.555: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0018941b0 exit status 1 true [0xc00198c000 0xc00198c018 0xc00198c030] [0xc00198c000 0xc00198c018 0xc00198c030] [0xc00198c010 0xc00198c028] [0x9355a0 0x9355a0] 0xc001c781e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:00:25.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:00:25.679: INFO: rc: 1 Mar 16 00:00:25.679: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023f4360 exit status 1 true [0xc0003056d8 0xc000305888 0xc000305910] [0xc0003056d8 0xc000305888 0xc000305910] [0xc000305868 0xc0003058e8] [0x9355a0 0x9355a0] 0xc001507020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:00:35.679: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:00:35.767: INFO: rc: 1 Mar 16 00:00:35.767: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023f4480 exit status 1 true [0xc000305930 0xc0003059b8 0xc000305b08] [0xc000305930 0xc0003059b8 0xc000305b08] [0xc000305998 0xc000305a98] [0x9355a0 0x9355a0] 0xc0015072c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:00:45.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:00:45.986: INFO: rc: 1 Mar 16 00:00:45.986: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023f45a0 exit status 1 true [0xc000305b30 0xc000305c20 0xc000305d08] [0xc000305b30 0xc000305c20 0xc000305d08] [0xc000305bb8 0xc000305cd8] [0x9355a0 0x9355a0] 0xc001507620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:00:55.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:00:56.117: INFO: rc: 1 Mar 16 00:00:56.117: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001894330 exit status 1 true [0xc00198c038 0xc00198c050 0xc00198c068] [0xc00198c038 0xc00198c050 0xc00198c068] [0xc00198c048 0xc00198c060] [0x9355a0 0x9355a0] 0xc001c78480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:01:06.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:01:06.210: INFO: rc: 1 Mar 16 00:01:06.210: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017a4480 exit status 1 true [0xc000e502b0 0xc000e50338 0xc000e503f8] [0xc000e502b0 0xc000e50338 0xc000e503f8] [0xc000e50308 0xc000e503c8] [0x9355a0 0x9355a0] 0xc001cea720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:01:16.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:01:16.302: INFO: rc: 1 Mar 16 00:01:16.302: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017a41e0 exit status 1 true [0xc000e50088 0xc000e500d8 0xc000e50140] [0xc000e50088 0xc000e500d8 
0xc000e50140] [0xc000e500c8 0xc000e500f0] [0x9355a0 0x9355a0] 0xc001cea1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:01:26.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:01:26.382: INFO: rc: 1 Mar 16 00:01:26.382: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017f4150 exit status 1 true [0xc0017fa000 0xc0017fa018 0xc0017fa038] [0xc0017fa000 0xc0017fa018 0xc0017fa038] [0xc0017fa010 0xc0017fa030] [0x9355a0 0x9355a0] 0xc0024f4540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:01:36.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:01:36.473: INFO: rc: 1 Mar 16 00:01:36.473: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017a4390 exit status 1 true [0xc000e50178 0xc000e50200 0xc000e50298] [0xc000e50178 0xc000e50200 0xc000e50298] [0xc000e501d0 0xc000e50260] [0x9355a0 0x9355a0] 0xc001cea480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:01:46.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:01:46.585: INFO: rc: 1 Mar 16 00:01:46.586: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017a44e0 exit status 1 true [0xc000e502b0 0xc000e50338 0xc000e503f8] [0xc000e502b0 0xc000e50338 0xc000e503f8] [0xc000e50308 0xc000e503c8] [0x9355a0 0x9355a0] 0xc001cea720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:01:56.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:01:56.692: INFO: rc: 1 Mar 16 00:01:56.692: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017f42d0 exit status 1 true [0xc0017fa040 0xc0017fa058 0xc0017fa070] [0xc0017fa040 0xc0017fa058 0xc0017fa070] [0xc0017fa050 0xc0017fa068] [0x9355a0 0x9355a0] 0xc0024f52c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:02:06.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:02:06.781: INFO: rc: 1 Mar 16 00:02:06.781: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017a4690 exit status 1 true [0xc000e50458 0xc000e506b0 0xc000e50848] [0xc000e50458 0xc000e506b0 0xc000e50848] [0xc000e505f0 0xc000e50808] [0x9355a0 0x9355a0] 0xc001ceb260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:02:16.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:02:16.875: INFO: rc: 1 Mar 16 00:02:16.875: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017a47b0 exit status 1 true [0xc000e508b0 0xc000e509d8 0xc000e50ac0] [0xc000e508b0 0xc000e509d8 0xc000e50ac0] [0xc000e50960 0xc000e50a68] [0x9355a0 0x9355a0] 0xc001ceb500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:02:26.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:02:26.970: INFO: rc: 1 Mar 16 00:02:26.970: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017a4900 exit status 1 true [0xc000e50b50 0xc000e50d00 0xc000e50de8] [0xc000e50b50 0xc000e50d00 0xc000e50de8] [0xc000e50c98 0xc000e50dc0] [0x9355a0 0x9355a0] 0xc001ceb7a0 }: Command stdout: stderr: Error from 
server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:02:36.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:02:37.053: INFO: rc: 1 Mar 16 00:02:37.053: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001894150 exit status 1 true [0xc00198c000 0xc00198c018 0xc00198c030] [0xc00198c000 0xc00198c018 0xc00198c030] [0xc00198c010 0xc00198c028] [0x9355a0 0x9355a0] 0xc001c781e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:02:47.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:02:47.142: INFO: rc: 1 Mar 16 00:02:47.142: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017f4450 exit status 1 true [0xc0017fa078 0xc0017fa090 0xc0017fa0a8] [0xc0017fa078 0xc0017fa090 0xc0017fa0a8] [0xc0017fa088 0xc0017fa0a0] [0x9355a0 0x9355a0] 0xc0024f5e00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:02:57.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:02:57.223: INFO: rc: 1 Mar 16 
00:02:57.223: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023f4180 exit status 1 true [0xc000304cf8 0xc000304f48 0xc000305418] [0xc000304cf8 0xc000304f48 0xc000305418] [0xc000304df0 0xc000305338] [0x9355a0 0x9355a0] 0xc001506a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:03:07.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:03:07.319: INFO: rc: 1 Mar 16 00:03:07.320: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017f45a0 exit status 1 true [0xc0017fa0b0 0xc0017fa0c8 0xc0017fa0e0] [0xc0017fa0b0 0xc0017fa0c8 0xc0017fa0e0] [0xc0017fa0c0 0xc0017fa0d8] [0x9355a0 0x9355a0] 0xc001cf1320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:03:17.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:03:17.409: INFO: rc: 1 Mar 16 00:03:17.409: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 
0xc001894120 exit status 1 true [0xc00198c008 0xc00198c020 0xc00198c038] [0xc00198c008 0xc00198c020 0xc00198c038] [0xc00198c018 0xc00198c030] [0x9355a0 0x9355a0] 0xc0024f4540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:03:27.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:03:28.151: INFO: rc: 1 Mar 16 00:03:28.151: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023f4120 exit status 1 true [0xc000304cf8 0xc000304f48 0xc000305418] [0xc000304cf8 0xc000304f48 0xc000305418] [0xc000304df0 0xc000305338] [0x9355a0 0x9355a0] 0xc001c781e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:03:38.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:03:38.263: INFO: rc: 1 Mar 16 00:03:38.263: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017f4120 exit status 1 true [0xc0017fa000 0xc0017fa018 0xc0017fa038] [0xc0017fa000 0xc0017fa018 0xc0017fa038] [0xc0017fa010 0xc0017fa030] [0x9355a0 0x9355a0] 0xc001506a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:03:48.263: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:03:48.349: INFO: rc: 1 Mar 16 00:03:48.349: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0023f42d0 exit status 1 true [0xc000305448 0xc000305500 0xc000305670] [0xc000305448 0xc000305500 0xc000305670] [0xc0003054a0 0xc0003055d8] [0x9355a0 0x9355a0] 0xc001c78480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Mar 16 00:03:58.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:03:58.436: INFO: rc: 1 Mar 16 00:03:58.436: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Mar 16 00:03:58.436: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 16 00:03:58.447: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7wszt Mar 16 00:03:58.449: INFO: Scaling statefulset ss to 0 Mar 16 00:03:58.456: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 00:03:58.458: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:03:58.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-7wszt" for this suite. 
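The ten-second retry loop that dominates the log above is the e2e framework's RunHostCmd backoff: the same `kubectl exec` is re-issued every 10s until it succeeds or the overall deadline expires (here it never succeeds, apparently because `ss-2` has already been removed by the scale-down). A minimal shell sketch of that cadence, assuming the namespace and pod name from the log; the helper name and structure are illustrative, not the framework's actual Go code:

```shell
#!/bin/sh
# retry_host_cmd DEADLINE INTERVAL CMD...
# Re-run CMD every INTERVAL seconds until it succeeds or DEADLINE
# seconds have elapsed. Mirrors the 10s retry cadence in the log;
# illustrative only.
retry_host_cmd() {
    deadline=$1; shift
    interval=$1; shift
    elapsed=0
    while [ "$elapsed" -lt "$deadline" ]; do
        if "$@"; then
            return 0
        fi
        echo "rc: $? -- retrying in ${interval}s" >&2
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    return 1
}

# Against the pod from the log (requires a live cluster):
# retry_host_cmd 300 10 kubectl --kubeconfig=/root/.kube/config \
#     exec --namespace=e2e-tests-statefulset-7wszt ss-2 -- \
#     /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
```

As in the log, a pod that no longer exists fails every attempt, so the helper eventually gives up and returns nonzero once the deadline passes.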
Mar 16 00:04:07.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:04:08.160: INFO: namespace: e2e-tests-statefulset-7wszt, resource: bindings, ignored listing per whitelist Mar 16 00:04:08.168: INFO: namespace e2e-tests-statefulset-7wszt deletion completed in 9.69120799s • [SLOW TEST:379.603 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:04:08.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 16 00:04:08.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Mar 16 00:04:08.650: INFO: stderr: "" Mar 16 00:04:08.650: 
INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T22:39:48Z\", GoVersion:\"go1.11.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Mar 16 00:04:08.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-srt7l' Mar 16 00:04:19.230: INFO: stderr: "" Mar 16 00:04:19.230: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 16 00:04:19.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-srt7l' Mar 16 00:04:20.293: INFO: stderr: "" Mar 16 00:04:20.293: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 16 00:04:21.331: INFO: Selector matched 1 pods for map[app:redis] Mar 16 00:04:21.331: INFO: Found 0 / 1 Mar 16 00:04:22.314: INFO: Selector matched 1 pods for map[app:redis] Mar 16 00:04:22.314: INFO: Found 0 / 1 Mar 16 00:04:23.362: INFO: Selector matched 1 pods for map[app:redis] Mar 16 00:04:23.362: INFO: Found 0 / 1 Mar 16 00:04:24.789: INFO: Selector matched 1 pods for map[app:redis] Mar 16 00:04:24.789: INFO: Found 1 / 1 Mar 16 00:04:24.789: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 16 00:04:24.793: INFO: Selector matched 1 pods for map[app:redis] Mar 16 00:04:24.793: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
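The "Found 0 / 1 … Found 1 / 1" polling above is the framework waiting for pods matching the `app=redis` selector to reach phase Running. A pure-shell equivalent of that tally, with a hedged kubectl pipeline to feed it (namespace and selector are taken from the log; the commented command assumes a live cluster):

```shell
#!/bin/sh
# count_running: given "NAME PHASE" lines on stdin, print how many pods
# are in phase Running -- the same "Found N / M" tally the log shows.
count_running() {
    awk '$2 == "Running" { n++ } END { print n + 0 }'
}

# With a live cluster, feed it real data (namespace/selector from the log):
# kubectl --kubeconfig=/root/.kube/config get pods \
#     --namespace=e2e-tests-kubectl-srt7l -l app=redis --no-headers \
#     -o custom-columns=NAME:.metadata.name,PHASE:.status.phase \
#   | count_running
```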
Mar 16 00:04:24.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-x8725 --namespace=e2e-tests-kubectl-srt7l' Mar 16 00:04:25.017: INFO: stderr: "" Mar 16 00:04:25.017: INFO: stdout: "Name: redis-master-x8725\nNamespace: e2e-tests-kubectl-srt7l\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Mon, 16 Mar 2020 00:04:19 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.25\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://2180b341eb728ac5af24f309be1319ed5285122d2b9e8a8acbdd4394c3153bd4\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 16 Mar 2020 00:04:22 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-5fqlk (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-5fqlk:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-5fqlk\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned e2e-tests-kubectl-srt7l/redis-master-x8725 to hunter-worker2\n Normal Pulled 4s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, hunter-worker2 Created container\n Normal Started 3s kubelet, hunter-worker2 Started container\n" Mar 16 00:04:25.017: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-srt7l'
Mar 16 00:04:25.176: INFO: stderr: ""
Mar 16 00:04:25.176: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-srt7l\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: redis-master-x8725\n"
Mar 16 00:04:25.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-srt7l'
Mar 16 00:04:25.292: INFO: stderr: ""
Mar 16 00:04:25.293: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-srt7l\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.108.241.11\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.25:6379\nSession Affinity: None\nEvents: \n"
Mar 16 00:04:25.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
Mar 16 00:04:25.403: INFO: stderr: ""
Mar 16 00:04:25.403: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 16 Mar 2020 00:04:19 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 16 Mar 2020 00:04:19 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 16 Mar 2020 00:04:19 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 16 Mar 2020 00:04:19 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5h40m\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 5h41m\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 5h40m\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 5h40m\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5h41m\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 5h40m\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5h41m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
Mar 16 00:04:25.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-srt7l'
Mar 16 00:04:25.511: INFO: stderr: ""
Mar 16 00:04:25.511: INFO: stdout: "Name: e2e-tests-kubectl-srt7l\nLabels: e2e-framework=kubectl\n e2e-run=6881bcb9-670e-11ea-811c-0242ac110013\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:04:25.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-srt7l" for this suite.
Mar 16 00:04:51.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:04:51.738: INFO: namespace: e2e-tests-kubectl-srt7l, resource: bindings, ignored listing per whitelist
Mar 16 00:04:51.784: INFO: namespace e2e-tests-kubectl-srt7l deletion completed in 26.26973288s
• [SLOW TEST:43.616 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:04:51.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 16 00:04:52.269: INFO: Waiting up to 5m0s for pod "downward-api-c24d1e17-6719-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-6rwxd" to be "success or failure"
Mar 16 00:04:52.277: INFO: Pod "downward-api-c24d1e17-6719-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 7.971105ms
Mar 16 00:04:54.280: INFO: Pod "downward-api-c24d1e17-6719-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011347917s
Mar 16 00:04:56.283: INFO: Pod "downward-api-c24d1e17-6719-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.014525294s
Mar 16 00:04:58.288: INFO: Pod "downward-api-c24d1e17-6719-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01895648s
STEP: Saw pod success
Mar 16 00:04:58.288: INFO: Pod "downward-api-c24d1e17-6719-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 16 00:04:58.290: INFO: Trying to get logs from node hunter-worker pod downward-api-c24d1e17-6719-11ea-811c-0242ac110013 container dapi-container: 
STEP: delete the pod
Mar 16 00:04:58.324: INFO: Waiting for pod downward-api-c24d1e17-6719-11ea-811c-0242ac110013 to disappear
Mar 16 00:04:58.336: INFO: Pod downward-api-c24d1e17-6719-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:04:58.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6rwxd" for this suite.
Mar 16 00:05:04.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:05:04.414: INFO: namespace: e2e-tests-downward-api-6rwxd, resource: bindings, ignored listing per whitelist
Mar 16 00:05:04.458: INFO: namespace e2e-tests-downward-api-6rwxd deletion completed in 6.117878594s
• [SLOW TEST:12.674 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:05:04.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-c9ca6b9f-6719-11ea-811c-0242ac110013
STEP: Creating a pod to test consume secrets
Mar 16 00:05:04.590: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c9cd69ff-6719-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-mrmw9" to be "success or failure"
Mar 16 00:05:04.592: INFO: Pod "pod-projected-secrets-c9cd69ff-6719-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 1.796519ms
Mar 16 00:05:06.653: INFO: Pod "pod-projected-secrets-c9cd69ff-6719-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063115654s
Mar 16 00:05:08.663: INFO: Pod "pod-projected-secrets-c9cd69ff-6719-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072702849s
Mar 16 00:05:10.666: INFO: Pod "pod-projected-secrets-c9cd69ff-6719-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.076011153s
STEP: Saw pod success
Mar 16 00:05:10.666: INFO: Pod "pod-projected-secrets-c9cd69ff-6719-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 16 00:05:10.668: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-c9cd69ff-6719-11ea-811c-0242ac110013 container projected-secret-volume-test: 
STEP: delete the pod
Mar 16 00:05:10.817: INFO: Waiting for pod pod-projected-secrets-c9cd69ff-6719-11ea-811c-0242ac110013 to disappear
Mar 16 00:05:10.819: INFO: Pod pod-projected-secrets-c9cd69ff-6719-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:05:10.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mrmw9" for this suite.
Mar 16 00:05:17.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:05:17.287: INFO: namespace: e2e-tests-projected-mrmw9, resource: bindings, ignored listing per whitelist
Mar 16 00:05:18.073: INFO: namespace e2e-tests-projected-mrmw9 deletion completed in 7.24998076s
• [SLOW TEST:13.614 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:05:18.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Mar 16 00:05:18.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:18.607: INFO: stderr: ""
Mar 16 00:05:18.607: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 16 00:05:18.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:18.704: INFO: stderr: ""
Mar 16 00:05:18.704: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Mar 16 00:05:23.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:23.810: INFO: stderr: ""
Mar 16 00:05:23.810: INFO: stdout: "update-demo-nautilus-5bch5 update-demo-nautilus-bzldc "
Mar 16 00:05:23.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bch5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:23.905: INFO: stderr: ""
Mar 16 00:05:23.905: INFO: stdout: "true"
Mar 16 00:05:23.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bch5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:24.007: INFO: stderr: ""
Mar 16 00:05:24.007: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 16 00:05:24.007: INFO: validating pod update-demo-nautilus-5bch5
Mar 16 00:05:24.011: INFO: got data: { "image": "nautilus.jpg" }
Mar 16 00:05:24.011: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 16 00:05:24.011: INFO: update-demo-nautilus-5bch5 is verified up and running
Mar 16 00:05:24.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzldc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:24.109: INFO: stderr: ""
Mar 16 00:05:24.109: INFO: stdout: "true"
Mar 16 00:05:24.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzldc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:24.200: INFO: stderr: ""
Mar 16 00:05:24.200: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 16 00:05:24.200: INFO: validating pod update-demo-nautilus-bzldc
Mar 16 00:05:24.204: INFO: got data: { "image": "nautilus.jpg" }
Mar 16 00:05:24.204: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 16 00:05:24.204: INFO: update-demo-nautilus-bzldc is verified up and running
STEP: rolling-update to new replication controller
Mar 16 00:05:24.206: INFO: scanned /root for discovery docs: 
Mar 16 00:05:24.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:47.592: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar 16 00:05:47.592: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 16 00:05:47.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:47.723: INFO: stderr: ""
Mar 16 00:05:47.724: INFO: stdout: "update-demo-kitten-7x978 update-demo-kitten-cxdt4 "
Mar 16 00:05:47.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7x978 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:47.853: INFO: stderr: ""
Mar 16 00:05:47.854: INFO: stdout: "true"
Mar 16 00:05:47.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7x978 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:47.957: INFO: stderr: ""
Mar 16 00:05:47.958: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 16 00:05:47.958: INFO: validating pod update-demo-kitten-7x978
Mar 16 00:05:47.962: INFO: got data: { "image": "kitten.jpg" }
Mar 16 00:05:47.962: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 16 00:05:47.962: INFO: update-demo-kitten-7x978 is verified up and running
Mar 16 00:05:47.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cxdt4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:48.063: INFO: stderr: ""
Mar 16 00:05:48.063: INFO: stdout: "true"
Mar 16 00:05:48.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cxdt4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bcmkj'
Mar 16 00:05:48.169: INFO: stderr: ""
Mar 16 00:05:48.169: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 16 00:05:48.169: INFO: validating pod update-demo-kitten-cxdt4
Mar 16 00:05:48.173: INFO: got data: { "image": "kitten.jpg" }
Mar 16 00:05:48.173: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 16 00:05:48.173: INFO: update-demo-kitten-cxdt4 is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:05:48.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bcmkj" for this suite.
Mar 16 00:06:12.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:06:12.218: INFO: namespace: e2e-tests-kubectl-bcmkj, resource: bindings, ignored listing per whitelist
Mar 16 00:06:12.268: INFO: namespace e2e-tests-kubectl-bcmkj deletion completed in 24.091929529s
• [SLOW TEST:54.195 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:06:12.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-tsbsq/configmap-test-f2320df9-6719-11ea-811c-0242ac110013
STEP: Creating a pod to test consume configMaps
Mar 16 00:06:12.362: INFO: Waiting up to 5m0s for pod "pod-configmaps-f233b598-6719-11ea-811c-0242ac110013" in namespace "e2e-tests-configmap-tsbsq" to be "success or failure"
Mar 16 00:06:12.366: INFO: Pod "pod-configmaps-f233b598-6719-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 3.907422ms
Mar 16 00:06:14.370: INFO: Pod "pod-configmaps-f233b598-6719-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008054883s
Mar 16 00:06:16.393: INFO: Pod "pod-configmaps-f233b598-6719-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031119421s
STEP: Saw pod success
Mar 16 00:06:16.393: INFO: Pod "pod-configmaps-f233b598-6719-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 16 00:06:16.395: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-f233b598-6719-11ea-811c-0242ac110013 container env-test: 
STEP: delete the pod
Mar 16 00:06:16.415: INFO: Waiting for pod pod-configmaps-f233b598-6719-11ea-811c-0242ac110013 to disappear
Mar 16 00:06:16.451: INFO: Pod pod-configmaps-f233b598-6719-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:06:16.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tsbsq" for this suite.
Mar 16 00:06:22.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:06:22.617: INFO: namespace: e2e-tests-configmap-tsbsq, resource: bindings, ignored listing per whitelist
Mar 16 00:06:22.695: INFO: namespace e2e-tests-configmap-tsbsq deletion completed in 6.239755551s
• [SLOW TEST:10.427 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:06:22.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:06:22.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5qm78" for this suite.
Mar 16 00:06:46.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:06:46.312: INFO: namespace: e2e-tests-pods-5qm78, resource: bindings, ignored listing per whitelist
Mar 16 00:06:46.360: INFO: namespace e2e-tests-pods-5qm78 deletion completed in 23.534590224s
• [SLOW TEST:23.666 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:06:46.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-2chhc
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2chhc to expose endpoints map[]
Mar 16 00:06:46.782: INFO: Get endpoints failed (26.747127ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Mar 16 00:06:47.786: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2chhc exposes endpoints map[] (1.030332277s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-2chhc
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2chhc to expose endpoints map[pod1:[100]]
Mar 16 00:06:51.978: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2chhc exposes endpoints map[pod1:[100]] (4.186423244s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-2chhc
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2chhc to expose endpoints map[pod1:[100] pod2:[101]]
Mar 16 00:06:56.414: INFO: Unexpected endpoints: found map[0752af63-671a-11ea-99e8-0242ac110002:[100]], expected map[pod1:[100] pod2:[101]] (4.433124234s elapsed, will retry)
Mar 16 00:06:58.711: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2chhc exposes endpoints map[pod1:[100] pod2:[101]] (6.729729783s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-2chhc
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2chhc to expose endpoints map[pod2:[101]]
Mar 16 00:07:01.131: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2chhc exposes endpoints map[pod2:[101]] (2.41636465s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-2chhc
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2chhc to expose endpoints map[]
Mar 16 00:07:02.144: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2chhc exposes endpoints map[] (1.008213741s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:07:02.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-2chhc" for this suite.
Mar 16 00:07:24.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:07:24.281: INFO: namespace: e2e-tests-services-2chhc, resource: bindings, ignored listing per whitelist
Mar 16 00:07:24.289: INFO: namespace e2e-tests-services-2chhc deletion completed in 22.117715122s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:37.929 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:07:24.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 16 00:07:28.970: INFO: Successfully updated pod "annotationupdate1d25cbee-671a-11ea-811c-0242ac110013"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:07:31.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7xjgj" for this suite.
Mar 16 00:07:53.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:07:53.470: INFO: namespace: e2e-tests-downward-api-7xjgj, resource: bindings, ignored listing per whitelist
Mar 16 00:07:53.474: INFO: namespace e2e-tests-downward-api-7xjgj deletion completed in 22.276070653s
• [SLOW TEST:29.184 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:07:53.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 16 00:07:53.585: INFO: Waiting up to 5m0s for pod "pod-2e898823-671a-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-xk9bk" to be "success or failure"
Mar 16 00:07:53.639: INFO: Pod "pod-2e898823-671a-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 53.718644ms
Mar 16 00:07:55.643: INFO: Pod "pod-2e898823-671a-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057918107s
Mar 16 00:07:57.718: INFO: Pod "pod-2e898823-671a-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.132067573s
Mar 16 00:07:59.721: INFO: Pod "pod-2e898823-671a-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.135290676s
STEP: Saw pod success
Mar 16 00:07:59.721: INFO: Pod "pod-2e898823-671a-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 16 00:07:59.723: INFO: Trying to get logs from node hunter-worker2 pod pod-2e898823-671a-11ea-811c-0242ac110013 container test-container: 
STEP: delete the pod
Mar 16 00:07:59.861: INFO: Waiting for pod pod-2e898823-671a-11ea-811c-0242ac110013 to disappear
Mar 16 00:07:59.877: INFO: Pod pod-2e898823-671a-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:07:59.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xk9bk" for this suite.
Mar 16 00:08:05.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:08:05.983: INFO: namespace: e2e-tests-emptydir-xk9bk, resource: bindings, ignored listing per whitelist
Mar 16 00:08:05.986: INFO: namespace e2e-tests-emptydir-xk9bk deletion completed in 6.105919318s
• [SLOW TEST:12.512 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:08:05.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-35fbba24-671a-11ea-811c-0242ac110013
STEP: Creating a pod to test consume secrets
Mar 16 00:08:06.095: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-35fe0d70-671a-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-pvd2z" to be "success or failure"
Mar 16 00:08:06.100: INFO: Pod "pod-projected-secrets-35fe0d70-671a-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 5.047826ms
Mar 16 00:08:08.104: INFO: Pod "pod-projected-secrets-35fe0d70-671a-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009361673s
Mar 16 00:08:10.108: INFO: Pod "pod-projected-secrets-35fe0d70-671a-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013451062s
STEP: Saw pod success
Mar 16 00:08:10.108: INFO: Pod "pod-projected-secrets-35fe0d70-671a-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 16 00:08:10.112: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-35fe0d70-671a-11ea-811c-0242ac110013 container projected-secret-volume-test: 
STEP: delete the pod
Mar 16 00:08:10.150: INFO: Waiting for pod pod-projected-secrets-35fe0d70-671a-11ea-811c-0242ac110013 to disappear
Mar 16 00:08:10.170: INFO: Pod pod-projected-secrets-35fe0d70-671a-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:08:10.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pvd2z" for this suite.
Mar 16 00:08:16.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:08:16.252: INFO: namespace: e2e-tests-projected-pvd2z, resource: bindings, ignored listing per whitelist Mar 16 00:08:16.268: INFO: namespace e2e-tests-projected-pvd2z deletion completed in 6.094181048s • [SLOW TEST:10.281 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:08:16.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 16 00:08:16.382: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 16 00:08:21.405: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 16 00:08:21.405: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 16 00:08:21.446: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-vc22q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vc22q/deployments/test-cleanup-deployment,UID:3f2010a3-671a-11ea-99e8-0242ac110002,ResourceVersion:57212,Generation:1,CreationTimestamp:2020-03-16 00:08:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 16 00:08:21.532: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Mar 16 00:08:21.532: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 16 00:08:21.533: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-vc22q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vc22q/replicasets/test-cleanup-controller,UID:3c1ce501-671a-11ea-99e8-0242ac110002,ResourceVersion:57213,Generation:1,CreationTimestamp:2020-03-16 00:08:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 3f2010a3-671a-11ea-99e8-0242ac110002 0xc001a80987 0xc001a80988}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 16 00:08:21.545: INFO: Pod "test-cleanup-controller-2rtxz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-2rtxz,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-vc22q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vc22q/pods/test-cleanup-controller-2rtxz,UID:3c218ab3-671a-11ea-99e8-0242ac110002,ResourceVersion:57205,Generation:0,CreationTimestamp:2020-03-16 00:08:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 3c1ce501-671a-11ea-99e8-0242ac110002 0xc00226e997 0xc00226e998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-z29n8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z29n8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z29n8 true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00226ea10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00226eb00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 00:08:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 00:08:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 00:08:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 00:08:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.68,StartTime:2020-03-16 00:08:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-16 00:08:18 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4c7b39bbd70ec7b1e18ef93d6207d5667ce1cd8062dc221adcbeb3b38ac62d65}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:08:21.546: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-vc22q" for this suite. Mar 16 00:08:27.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:08:27.798: INFO: namespace: e2e-tests-deployment-vc22q, resource: bindings, ignored listing per whitelist Mar 16 00:08:27.800: INFO: namespace e2e-tests-deployment-vc22q deletion completed in 6.244871882s • [SLOW TEST:11.532 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:08:27.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Mar 16 00:08:27.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 16 00:08:28.069: INFO: stderr: "" Mar 16 00:08:28.069: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:08:28.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2n4t6" for this suite. Mar 16 00:08:34.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:08:34.139: INFO: namespace: e2e-tests-kubectl-2n4t6, resource: bindings, ignored listing per whitelist Mar 16 00:08:34.176: INFO: namespace e2e-tests-kubectl-2n4t6 deletion completed in 6.101862928s • [SLOW TEST:6.375 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
[BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:08:34.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 16 00:08:40.304: INFO: 10 pods remaining Mar 16 00:08:40.304: INFO: 10 pods have nil DeletionTimestamp Mar 16 00:08:40.304: INFO: Mar 16 00:08:41.373: INFO: 0 pods remaining Mar 16 00:08:41.374: INFO: 0 pods have nil DeletionTimestamp Mar 16 00:08:41.374: INFO: STEP: Gathering metrics W0316 00:08:42.487843 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 16 00:08:42.487: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:08:42.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-q4ssn" for this suite. 
Mar 16 00:08:48.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:08:48.706: INFO: namespace: e2e-tests-gc-q4ssn, resource: bindings, ignored listing per whitelist Mar 16 00:08:48.780: INFO: namespace e2e-tests-gc-q4ssn deletion completed in 6.288761168s • [SLOW TEST:14.604 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:08:48.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: 
Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:09:19.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-t22c6" for this suite. Mar 16 00:09:25.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:09:25.672: INFO: namespace: e2e-tests-container-runtime-t22c6, resource: bindings, ignored listing per whitelist Mar 16 00:09:25.712: INFO: namespace e2e-tests-container-runtime-t22c6 deletion completed in 6.144112664s • [SLOW TEST:36.932 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from 
pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:09:25.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-657e46f3-671a-11ea-811c-0242ac110013 STEP: Creating a pod to test consume secrets Mar 16 00:09:25.907: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-658503c8-671a-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-x4jvz" to be "success or failure" Mar 16 00:09:25.917: INFO: Pod "pod-projected-secrets-658503c8-671a-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 9.640483ms Mar 16 00:09:27.920: INFO: Pod "pod-projected-secrets-658503c8-671a-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013405297s Mar 16 00:09:29.925: INFO: Pod "pod-projected-secrets-658503c8-671a-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017797499s STEP: Saw pod success Mar 16 00:09:29.925: INFO: Pod "pod-projected-secrets-658503c8-671a-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:09:29.928: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-658503c8-671a-11ea-811c-0242ac110013 container projected-secret-volume-test: STEP: delete the pod Mar 16 00:09:30.020: INFO: Waiting for pod pod-projected-secrets-658503c8-671a-11ea-811c-0242ac110013 to disappear Mar 16 00:09:30.030: INFO: Pod pod-projected-secrets-658503c8-671a-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:09:30.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x4jvz" for this suite. Mar 16 00:09:36.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:09:36.055: INFO: namespace: e2e-tests-projected-x4jvz, resource: bindings, ignored listing per whitelist Mar 16 00:09:36.103: INFO: namespace e2e-tests-projected-x4jvz deletion completed in 6.070213572s • [SLOW TEST:10.391 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client Mar 16 00:09:36.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 16 00:09:36.554: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6be73c09-671a-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-fxpn4" to be "success or failure" Mar 16 00:09:36.588: INFO: Pod "downwardapi-volume-6be73c09-671a-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 34.278237ms Mar 16 00:09:38.590: INFO: Pod "downwardapi-volume-6be73c09-671a-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036867145s Mar 16 00:09:40.593: INFO: Pod "downwardapi-volume-6be73c09-671a-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039597543s STEP: Saw pod success Mar 16 00:09:40.593: INFO: Pod "downwardapi-volume-6be73c09-671a-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:09:40.595: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6be73c09-671a-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 16 00:09:40.673: INFO: Waiting for pod downwardapi-volume-6be73c09-671a-11ea-811c-0242ac110013 to disappear Mar 16 00:09:40.701: INFO: Pod downwardapi-volume-6be73c09-671a-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:09:40.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fxpn4" for this suite. Mar 16 00:09:46.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:09:46.748: INFO: namespace: e2e-tests-projected-fxpn4, resource: bindings, ignored listing per whitelist Mar 16 00:09:46.835: INFO: namespace e2e-tests-projected-fxpn4 deletion completed in 6.131574868s • [SLOW TEST:10.732 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:09:46.835: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 16 00:09:46.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ds7l6' Mar 16 00:09:47.208: INFO: stderr: "" Mar 16 00:09:47.208: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 16 00:09:47.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ds7l6' Mar 16 00:09:47.353: INFO: stderr: "" Mar 16 00:09:47.353: INFO: stdout: "update-demo-nautilus-dwpvp update-demo-nautilus-t6hdx " Mar 16 00:09:47.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dwpvp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ds7l6' Mar 16 00:09:47.534: INFO: stderr: "" Mar 16 00:09:47.534: INFO: stdout: "" Mar 16 00:09:47.534: INFO: update-demo-nautilus-dwpvp is created but not running Mar 16 00:09:52.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ds7l6' Mar 16 00:09:52.630: INFO: stderr: "" Mar 16 00:09:52.630: INFO: stdout: "update-demo-nautilus-dwpvp update-demo-nautilus-t6hdx " Mar 16 00:09:52.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dwpvp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ds7l6' Mar 16 00:09:52.731: INFO: stderr: "" Mar 16 00:09:52.731: INFO: stdout: "true" Mar 16 00:09:52.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dwpvp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ds7l6' Mar 16 00:09:52.821: INFO: stderr: "" Mar 16 00:09:52.821: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 00:09:52.821: INFO: validating pod update-demo-nautilus-dwpvp Mar 16 00:09:52.824: INFO: got data: { "image": "nautilus.jpg" } Mar 16 00:09:52.824: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 00:09:52.824: INFO: update-demo-nautilus-dwpvp is verified up and running Mar 16 00:09:52.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t6hdx -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ds7l6' Mar 16 00:09:52.917: INFO: stderr: "" Mar 16 00:09:52.917: INFO: stdout: "true" Mar 16 00:09:52.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t6hdx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ds7l6' Mar 16 00:09:53.009: INFO: stderr: "" Mar 16 00:09:53.009: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 00:09:53.009: INFO: validating pod update-demo-nautilus-t6hdx Mar 16 00:09:53.013: INFO: got data: { "image": "nautilus.jpg" } Mar 16 00:09:53.013: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 00:09:53.013: INFO: update-demo-nautilus-t6hdx is verified up and running STEP: using delete to clean up resources Mar 16 00:09:53.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ds7l6' Mar 16 00:09:53.152: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 16 00:09:53.152: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 16 00:09:53.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-ds7l6' Mar 16 00:09:53.252: INFO: stderr: "No resources found.\n" Mar 16 00:09:53.252: INFO: stdout: "" Mar 16 00:09:53.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-ds7l6 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 16 00:09:53.348: INFO: stderr: "" Mar 16 00:09:53.348: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:09:53.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ds7l6" for this suite. 
Mar 16 00:10:15.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:10:15.584: INFO: namespace: e2e-tests-kubectl-ds7l6, resource: bindings, ignored listing per whitelist Mar 16 00:10:15.605: INFO: namespace e2e-tests-kubectl-ds7l6 deletion completed in 22.25340081s • [SLOW TEST:28.770 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:10:15.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 16 00:10:15.901: INFO: Waiting up to 5m0s for pod "downwardapi-volume-835c30a4-671a-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-sqcbt" to be "success or failure" Mar 16 00:10:16.013: INFO: Pod 
"downwardapi-volume-835c30a4-671a-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 112.299861ms Mar 16 00:10:18.133: INFO: Pod "downwardapi-volume-835c30a4-671a-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232041595s Mar 16 00:10:20.136: INFO: Pod "downwardapi-volume-835c30a4-671a-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.235220171s STEP: Saw pod success Mar 16 00:10:20.136: INFO: Pod "downwardapi-volume-835c30a4-671a-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:10:20.139: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-835c30a4-671a-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 16 00:10:20.429: INFO: Waiting for pod downwardapi-volume-835c30a4-671a-11ea-811c-0242ac110013 to disappear Mar 16 00:10:20.587: INFO: Pod downwardapi-volume-835c30a4-671a-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:10:20.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sqcbt" for this suite. 
Mar 16 00:10:26.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:10:26.636: INFO: namespace: e2e-tests-projected-sqcbt, resource: bindings, ignored listing per whitelist Mar 16 00:10:26.690: INFO: namespace e2e-tests-projected-sqcbt deletion completed in 6.099738567s • [SLOW TEST:11.085 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:10:26.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Mar 16 00:10:31.110: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:10:55.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-cvpf6" for this suite. Mar 16 00:11:01.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:11:01.500: INFO: namespace: e2e-tests-namespaces-cvpf6, resource: bindings, ignored listing per whitelist Mar 16 00:11:01.525: INFO: namespace e2e-tests-namespaces-cvpf6 deletion completed in 6.107969445s STEP: Destroying namespace "e2e-tests-nsdeletetest-mnktf" for this suite. Mar 16 00:11:01.527: INFO: Namespace e2e-tests-nsdeletetest-mnktf was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-x97cr" for this suite. Mar 16 00:11:07.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:11:07.602: INFO: namespace: e2e-tests-nsdeletetest-x97cr, resource: bindings, ignored listing per whitelist Mar 16 00:11:07.644: INFO: namespace e2e-tests-nsdeletetest-x97cr deletion completed in 6.116743102s • [SLOW TEST:40.954 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:11:07.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:11:11.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-4ckgj" for this suite. Mar 16 00:11:51.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:11:51.963: INFO: namespace: e2e-tests-kubelet-test-4ckgj, resource: bindings, ignored listing per whitelist Mar 16 00:11:51.970: INFO: namespace e2e-tests-kubelet-test-4ckgj deletion completed in 40.092226949s • [SLOW TEST:44.326 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:11:51.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 16 00:11:52.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-c799l' Mar 16 00:11:52.336: INFO: stderr: "" Mar 16 00:11:52.336: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 16 00:11:53.341: INFO: Selector matched 1 pods for map[app:redis] Mar 16 00:11:53.341: INFO: Found 0 / 1 Mar 16 00:11:54.428: INFO: Selector matched 1 pods for map[app:redis] Mar 16 00:11:54.428: INFO: Found 0 / 1 Mar 16 00:11:55.341: INFO: Selector matched 1 pods for map[app:redis] Mar 16 00:11:55.341: INFO: Found 1 / 1 Mar 16 00:11:55.341: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 16 00:11:55.347: INFO: Selector matched 1 pods for map[app:redis] Mar 16 00:11:55.347: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 16 00:11:55.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-fm4rf --namespace=e2e-tests-kubectl-c799l -p {"metadata":{"annotations":{"x":"y"}}}' Mar 16 00:11:55.457: INFO: stderr: "" Mar 16 00:11:55.457: INFO: stdout: "pod/redis-master-fm4rf patched\n" STEP: checking annotations Mar 16 00:11:55.471: INFO: Selector matched 1 pods for map[app:redis] Mar 16 00:11:55.471: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:11:55.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-c799l" for this suite. Mar 16 00:12:17.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:12:17.589: INFO: namespace: e2e-tests-kubectl-c799l, resource: bindings, ignored listing per whitelist Mar 16 00:12:17.603: INFO: namespace e2e-tests-kubectl-c799l deletion completed in 22.09041366s • [SLOW TEST:25.632 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 
00:12:17.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-cc1c8852-671a-11ea-811c-0242ac110013 STEP: Creating secret with name s-test-opt-upd-cc1c890c-671a-11ea-811c-0242ac110013 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-cc1c8852-671a-11ea-811c-0242ac110013 STEP: Updating secret s-test-opt-upd-cc1c890c-671a-11ea-811c-0242ac110013 STEP: Creating secret with name s-test-opt-create-cc1c8950-671a-11ea-811c-0242ac110013 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:13:51.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8njxs" for this suite. 
Mar 16 00:14:13.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:14:13.308: INFO: namespace: e2e-tests-projected-8njxs, resource: bindings, ignored listing per whitelist Mar 16 00:14:13.351: INFO: namespace e2e-tests-projected-8njxs deletion completed in 22.099285507s • [SLOW TEST:115.748 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:14:13.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 16 00:14:13.500: INFO: Waiting up to 5m0s for pod "pod-10fa25fe-671b-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-p68j9" to be "success or failure" Mar 16 00:14:13.585: INFO: Pod "pod-10fa25fe-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 85.143212ms Mar 16 00:14:15.590: INFO: Pod "pod-10fa25fe-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.089803765s Mar 16 00:14:17.594: INFO: Pod "pod-10fa25fe-671b-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094075997s STEP: Saw pod success Mar 16 00:14:17.594: INFO: Pod "pod-10fa25fe-671b-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:14:17.597: INFO: Trying to get logs from node hunter-worker2 pod pod-10fa25fe-671b-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 16 00:14:17.631: INFO: Waiting for pod pod-10fa25fe-671b-11ea-811c-0242ac110013 to disappear Mar 16 00:14:17.641: INFO: Pod pod-10fa25fe-671b-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:14:17.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-p68j9" for this suite. Mar 16 00:14:23.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:14:23.736: INFO: namespace: e2e-tests-emptydir-p68j9, resource: bindings, ignored listing per whitelist Mar 16 00:14:23.736: INFO: namespace e2e-tests-emptydir-p68j9 deletion completed in 6.091575911s • [SLOW TEST:10.385 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:14:23.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 16 00:14:23.895: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:23.898: INFO: Number of nodes with available pods: 0 Mar 16 00:14:23.898: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:25.786: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:25.788: INFO: Number of nodes with available pods: 0 Mar 16 00:14:25.788: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:26.095: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:26.324: INFO: Number of nodes with available pods: 0 Mar 16 00:14:26.324: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:27.071: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:27.074: INFO: Number of nodes 
with available pods: 0 Mar 16 00:14:27.074: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:28.041: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:28.046: INFO: Number of nodes with available pods: 0 Mar 16 00:14:28.046: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:28.940: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:28.942: INFO: Number of nodes with available pods: 0 Mar 16 00:14:28.942: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:29.904: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:29.907: INFO: Number of nodes with available pods: 1 Mar 16 00:14:29.907: INFO: Node hunter-worker2 is running more than one daemon pod Mar 16 00:14:30.902: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:30.906: INFO: Number of nodes with available pods: 2 Mar 16 00:14:30.906: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 16 00:14:30.922: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:30.925: INFO: Number of nodes with available pods: 1 Mar 16 00:14:30.925: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:32.011: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:32.014: INFO: Number of nodes with available pods: 1 Mar 16 00:14:32.014: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:32.930: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:32.934: INFO: Number of nodes with available pods: 1 Mar 16 00:14:32.934: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:33.930: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:33.934: INFO: Number of nodes with available pods: 1 Mar 16 00:14:33.934: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:34.930: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:34.933: INFO: Number of nodes with available pods: 1 Mar 16 00:14:34.933: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:35.945: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:35.948: INFO: Number of nodes with available pods: 1 Mar 16 00:14:35.948: 
INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:36.930: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:36.934: INFO: Number of nodes with available pods: 1 Mar 16 00:14:36.934: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:38.030: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:38.033: INFO: Number of nodes with available pods: 1 Mar 16 00:14:38.033: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:38.930: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:38.934: INFO: Number of nodes with available pods: 1 Mar 16 00:14:38.934: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:39.929: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:39.932: INFO: Number of nodes with available pods: 1 Mar 16 00:14:39.932: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:40.944: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:40.948: INFO: Number of nodes with available pods: 1 Mar 16 00:14:40.948: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:41.930: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:41.934: INFO: 
Number of nodes with available pods: 1 Mar 16 00:14:41.934: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:42.930: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:42.934: INFO: Number of nodes with available pods: 1 Mar 16 00:14:42.934: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:14:43.939: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:14:43.941: INFO: Number of nodes with available pods: 2 Mar 16 00:14:43.941: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wn2b6, will wait for the garbage collector to delete the pods Mar 16 00:14:44.003: INFO: Deleting DaemonSet.extensions daemon-set took: 6.114983ms Mar 16 00:14:44.103: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.238942ms Mar 16 00:14:51.846: INFO: Number of nodes with available pods: 0 Mar 16 00:14:51.846: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 00:14:51.848: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wn2b6/daemonsets","resourceVersion":"58587"},"items":null} Mar 16 00:14:51.850: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wn2b6/pods","resourceVersion":"58587"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 
16 00:14:51.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-wn2b6" for this suite. Mar 16 00:14:57.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:14:57.894: INFO: namespace: e2e-tests-daemonsets-wn2b6, resource: bindings, ignored listing per whitelist Mar 16 00:14:57.999: INFO: namespace e2e-tests-daemonsets-wn2b6 deletion completed in 6.138208369s • [SLOW TEST:34.262 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:14:57.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Mar 16 00:14:58.726: INFO: created pod pod-service-account-defaultsa Mar 16 00:14:58.726: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 16 00:14:58.811: INFO: created pod pod-service-account-mountsa Mar 16 00:14:58.811: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 16 00:14:58.874: INFO: created pod 
pod-service-account-nomountsa Mar 16 00:14:58.874: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 16 00:14:58.887: INFO: created pod pod-service-account-defaultsa-mountspec Mar 16 00:14:58.887: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 16 00:14:58.952: INFO: created pod pod-service-account-mountsa-mountspec Mar 16 00:14:58.952: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 16 00:14:58.971: INFO: created pod pod-service-account-nomountsa-mountspec Mar 16 00:14:58.971: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 16 00:14:59.024: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 16 00:14:59.024: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 16 00:14:59.072: INFO: created pod pod-service-account-mountsa-nomountspec Mar 16 00:14:59.072: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 16 00:14:59.479: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 16 00:14:59.479: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:14:59.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-gl855" for this suite. 
Mar 16 00:15:28.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:15:28.615: INFO: namespace: e2e-tests-svcaccounts-gl855, resource: bindings, ignored listing per whitelist Mar 16 00:15:28.636: INFO: namespace e2e-tests-svcaccounts-gl855 deletion completed in 28.939106612s • [SLOW TEST:30.637 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:15:28.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 16 00:15:28.754: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3dcfcda1-671b-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-lx456" to be "success or failure" Mar 16 00:15:28.756: INFO: Pod 
"downwardapi-volume-3dcfcda1-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.784355ms Mar 16 00:15:30.760: INFO: Pod "downwardapi-volume-3dcfcda1-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006516066s Mar 16 00:15:32.765: INFO: Pod "downwardapi-volume-3dcfcda1-671b-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011125784s STEP: Saw pod success Mar 16 00:15:32.765: INFO: Pod "downwardapi-volume-3dcfcda1-671b-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:15:32.768: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-3dcfcda1-671b-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 16 00:15:32.815: INFO: Waiting for pod downwardapi-volume-3dcfcda1-671b-11ea-811c-0242ac110013 to disappear Mar 16 00:15:32.823: INFO: Pod downwardapi-volume-3dcfcda1-671b-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:15:32.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-lx456" for this suite. 
Mar 16 00:15:38.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:15:38.900: INFO: namespace: e2e-tests-downward-api-lx456, resource: bindings, ignored listing per whitelist Mar 16 00:15:38.923: INFO: namespace e2e-tests-downward-api-lx456 deletion completed in 6.097988231s • [SLOW TEST:10.287 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:15:38.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-jc7lr STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-jc7lr STEP: Deleting pre-stop pod Mar 16 00:15:54.077: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:15:54.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-jc7lr" for this suite. Mar 16 00:16:32.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:16:32.160: INFO: namespace: e2e-tests-prestop-jc7lr, resource: bindings, ignored listing per whitelist Mar 16 00:16:32.206: INFO: namespace e2e-tests-prestop-jc7lr deletion completed in 38.10896893s • [SLOW TEST:53.283 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:16:32.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-63b798dc-671b-11ea-811c-0242ac110013 STEP: Creating a pod to test consume configMaps Mar 16 00:16:32.318: INFO: Waiting up to 5m0s for pod "pod-configmaps-63b981a6-671b-11ea-811c-0242ac110013" in namespace "e2e-tests-configmap-sw6vq" to be "success or failure" Mar 16 00:16:32.321: INFO: Pod "pod-configmaps-63b981a6-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 3.400919ms Mar 16 00:16:34.325: INFO: Pod "pod-configmaps-63b981a6-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006869683s Mar 16 00:16:36.328: INFO: Pod "pod-configmaps-63b981a6-671b-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010482207s STEP: Saw pod success Mar 16 00:16:36.328: INFO: Pod "pod-configmaps-63b981a6-671b-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:16:36.331: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-63b981a6-671b-11ea-811c-0242ac110013 container configmap-volume-test: STEP: delete the pod Mar 16 00:16:36.351: INFO: Waiting for pod pod-configmaps-63b981a6-671b-11ea-811c-0242ac110013 to disappear Mar 16 00:16:36.355: INFO: Pod pod-configmaps-63b981a6-671b-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:16:36.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-sw6vq" for this suite. 
Mar 16 00:16:42.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:16:42.407: INFO: namespace: e2e-tests-configmap-sw6vq, resource: bindings, ignored listing per whitelist Mar 16 00:16:42.455: INFO: namespace e2e-tests-configmap-sw6vq deletion completed in 6.096314133s • [SLOW TEST:10.249 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:16:42.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 16 00:16:42.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment 
--image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-7bzmt' Mar 16 00:16:44.890: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 16 00:16:44.890: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Mar 16 00:16:48.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-7bzmt' Mar 16 00:16:49.015: INFO: stderr: "" Mar 16 00:16:49.015: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:16:49.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7bzmt" for this suite. 
Mar 16 00:17:11.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:17:11.097: INFO: namespace: e2e-tests-kubectl-7bzmt, resource: bindings, ignored listing per whitelist Mar 16 00:17:11.365: INFO: namespace e2e-tests-kubectl-7bzmt deletion completed in 22.347218195s • [SLOW TEST:28.909 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:17:11.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-fkddl I0316 00:17:11.776734 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-fkddl, replica count: 1 I0316 00:17:12.827165 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 00:17:13.827401 6 runners.go:184] 
svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 00:17:14.827642 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 00:17:14.956: INFO: Created: latency-svc-jqg65 Mar 16 00:17:14.974: INFO: Got endpoints: latency-svc-jqg65 [46.669402ms] Mar 16 00:17:14.992: INFO: Created: latency-svc-rqml8 Mar 16 00:17:15.005: INFO: Got endpoints: latency-svc-rqml8 [30.996536ms] Mar 16 00:17:15.021: INFO: Created: latency-svc-hdbsh Mar 16 00:17:15.073: INFO: Got endpoints: latency-svc-hdbsh [97.950867ms] Mar 16 00:17:15.080: INFO: Created: latency-svc-r92fp Mar 16 00:17:15.105: INFO: Got endpoints: latency-svc-r92fp [130.356719ms] Mar 16 00:17:15.128: INFO: Created: latency-svc-vjsjc Mar 16 00:17:15.138: INFO: Got endpoints: latency-svc-vjsjc [162.15381ms] Mar 16 00:17:15.159: INFO: Created: latency-svc-qjbwz Mar 16 00:17:15.172: INFO: Got endpoints: latency-svc-qjbwz [196.195041ms] Mar 16 00:17:15.212: INFO: Created: latency-svc-9vrbd Mar 16 00:17:15.214: INFO: Got endpoints: latency-svc-9vrbd [238.377943ms] Mar 16 00:17:15.262: INFO: Created: latency-svc-6k2zj Mar 16 00:17:15.280: INFO: Got endpoints: latency-svc-6k2zj [304.607952ms] Mar 16 00:17:15.310: INFO: Created: latency-svc-5qnxk Mar 16 00:17:15.342: INFO: Got endpoints: latency-svc-5qnxk [366.208106ms] Mar 16 00:17:15.358: INFO: Created: latency-svc-v2r94 Mar 16 00:17:15.373: INFO: Got endpoints: latency-svc-v2r94 [397.402719ms] Mar 16 00:17:15.404: INFO: Created: latency-svc-w2mbg Mar 16 00:17:15.428: INFO: Got endpoints: latency-svc-w2mbg [451.885227ms] Mar 16 00:17:15.504: INFO: Created: latency-svc-6bj75 Mar 16 00:17:15.512: INFO: Got endpoints: latency-svc-6bj75 [535.601817ms] Mar 16 00:17:15.531: INFO: Created: latency-svc-lscsj Mar 16 00:17:15.542: INFO: Got endpoints: latency-svc-lscsj [565.732924ms] Mar 16 00:17:15.574: 
INFO: Created: latency-svc-qzl54 Mar 16 00:17:15.590: INFO: Got endpoints: latency-svc-qzl54 [614.037193ms] Mar 16 00:17:15.695: INFO: Created: latency-svc-2x5tf Mar 16 00:17:15.699: INFO: Got endpoints: latency-svc-2x5tf [723.479541ms] Mar 16 00:17:15.725: INFO: Created: latency-svc-qnhbl Mar 16 00:17:15.747: INFO: Got endpoints: latency-svc-qnhbl [771.437164ms] Mar 16 00:17:15.778: INFO: Created: latency-svc-77nwb Mar 16 00:17:15.786: INFO: Got endpoints: latency-svc-77nwb [780.696233ms] Mar 16 00:17:15.833: INFO: Created: latency-svc-s6kws Mar 16 00:17:15.835: INFO: Got endpoints: latency-svc-s6kws [762.468357ms] Mar 16 00:17:15.872: INFO: Created: latency-svc-2xdpb Mar 16 00:17:15.895: INFO: Got endpoints: latency-svc-2xdpb [789.911388ms] Mar 16 00:17:15.971: INFO: Created: latency-svc-lcnz4 Mar 16 00:17:15.974: INFO: Got endpoints: latency-svc-lcnz4 [836.65987ms] Mar 16 00:17:15.998: INFO: Created: latency-svc-fw6s4 Mar 16 00:17:16.054: INFO: Got endpoints: latency-svc-fw6s4 [881.754576ms] Mar 16 00:17:16.122: INFO: Created: latency-svc-hctlr Mar 16 00:17:16.126: INFO: Got endpoints: latency-svc-hctlr [912.044518ms] Mar 16 00:17:16.150: INFO: Created: latency-svc-zw75b Mar 16 00:17:16.162: INFO: Got endpoints: latency-svc-zw75b [882.071062ms] Mar 16 00:17:16.202: INFO: Created: latency-svc-fn8dt Mar 16 00:17:16.307: INFO: Got endpoints: latency-svc-fn8dt [964.504703ms] Mar 16 00:17:16.360: INFO: Created: latency-svc-6qthr Mar 16 00:17:16.373: INFO: Got endpoints: latency-svc-6qthr [999.322246ms] Mar 16 00:17:16.395: INFO: Created: latency-svc-m4x8c Mar 16 00:17:16.474: INFO: Got endpoints: latency-svc-m4x8c [1.046117709s] Mar 16 00:17:16.478: INFO: Created: latency-svc-dwpt8 Mar 16 00:17:16.481: INFO: Got endpoints: latency-svc-dwpt8 [969.779029ms] Mar 16 00:17:16.526: INFO: Created: latency-svc-nnb86 Mar 16 00:17:16.535: INFO: Got endpoints: latency-svc-nnb86 [993.621376ms] Mar 16 00:17:16.555: INFO: Created: latency-svc-8wd7j Mar 16 00:17:16.995: INFO: Got 
endpoints: latency-svc-8wd7j [1.404508078s] Mar 16 00:17:16.997: INFO: Created: latency-svc-n78wc Mar 16 00:17:17.002: INFO: Got endpoints: latency-svc-n78wc [1.302354201s] Mar 16 00:17:17.301: INFO: Created: latency-svc-xbf4d Mar 16 00:17:17.304: INFO: Got endpoints: latency-svc-xbf4d [1.557004713s] Mar 16 00:17:17.331: INFO: Created: latency-svc-qw572 Mar 16 00:17:17.342: INFO: Got endpoints: latency-svc-qw572 [1.55620823s] Mar 16 00:17:17.364: INFO: Created: latency-svc-lmnvl Mar 16 00:17:17.380: INFO: Got endpoints: latency-svc-lmnvl [1.544630863s] Mar 16 00:17:17.400: INFO: Created: latency-svc-9ltrc Mar 16 00:17:17.474: INFO: Got endpoints: latency-svc-9ltrc [1.578410265s] Mar 16 00:17:17.534: INFO: Created: latency-svc-rwqkk Mar 16 00:17:17.547: INFO: Got endpoints: latency-svc-rwqkk [1.573150838s] Mar 16 00:17:17.648: INFO: Created: latency-svc-mbwpc Mar 16 00:17:17.652: INFO: Got endpoints: latency-svc-mbwpc [1.597962043s] Mar 16 00:17:17.679: INFO: Created: latency-svc-dgqmz Mar 16 00:17:17.694: INFO: Got endpoints: latency-svc-dgqmz [1.568034198s] Mar 16 00:17:17.715: INFO: Created: latency-svc-nlrmt Mar 16 00:17:17.731: INFO: Got endpoints: latency-svc-nlrmt [1.568168048s] Mar 16 00:17:17.786: INFO: Created: latency-svc-pbzqz Mar 16 00:17:17.789: INFO: Got endpoints: latency-svc-pbzqz [1.481910094s] Mar 16 00:17:17.814: INFO: Created: latency-svc-xbjnz Mar 16 00:17:17.827: INFO: Got endpoints: latency-svc-xbjnz [1.453957611s] Mar 16 00:17:17.844: INFO: Created: latency-svc-4dt5c Mar 16 00:17:17.870: INFO: Got endpoints: latency-svc-4dt5c [1.395658918s] Mar 16 00:17:17.959: INFO: Created: latency-svc-ccp5j Mar 16 00:17:17.964: INFO: Got endpoints: latency-svc-ccp5j [1.482165026s] Mar 16 00:17:18.008: INFO: Created: latency-svc-pgqfw Mar 16 00:17:18.023: INFO: Got endpoints: latency-svc-pgqfw [1.487920229s] Mar 16 00:17:18.056: INFO: Created: latency-svc-v8q95 Mar 16 00:17:18.110: INFO: Got endpoints: latency-svc-v8q95 [1.115186706s] Mar 16 00:17:18.138: 
INFO: Created: latency-svc-zv8pr Mar 16 00:17:18.150: INFO: Got endpoints: latency-svc-zv8pr [1.147796037s] Mar 16 00:17:18.174: INFO: Created: latency-svc-xc5bg Mar 16 00:17:18.276: INFO: Got endpoints: latency-svc-xc5bg [971.494677ms] Mar 16 00:17:18.278: INFO: Created: latency-svc-62cwc Mar 16 00:17:18.288: INFO: Got endpoints: latency-svc-62cwc [945.41568ms] Mar 16 00:17:18.326: INFO: Created: latency-svc-85jft Mar 16 00:17:18.342: INFO: Got endpoints: latency-svc-85jft [962.385161ms] Mar 16 00:17:18.362: INFO: Created: latency-svc-l67p6 Mar 16 00:17:18.438: INFO: Got endpoints: latency-svc-l67p6 [963.952277ms] Mar 16 00:17:18.439: INFO: Created: latency-svc-vt66l Mar 16 00:17:18.447: INFO: Got endpoints: latency-svc-vt66l [899.953539ms] Mar 16 00:17:18.468: INFO: Created: latency-svc-ncjxb Mar 16 00:17:18.478: INFO: Got endpoints: latency-svc-ncjxb [825.973628ms] Mar 16 00:17:18.498: INFO: Created: latency-svc-jz5fj Mar 16 00:17:18.508: INFO: Got endpoints: latency-svc-jz5fj [813.637202ms] Mar 16 00:17:18.528: INFO: Created: latency-svc-dzkfw Mar 16 00:17:18.582: INFO: Got endpoints: latency-svc-dzkfw [850.931496ms] Mar 16 00:17:18.602: INFO: Created: latency-svc-fcbxd Mar 16 00:17:18.617: INFO: Got endpoints: latency-svc-fcbxd [827.719406ms] Mar 16 00:17:18.638: INFO: Created: latency-svc-f6mxx Mar 16 00:17:18.653: INFO: Got endpoints: latency-svc-f6mxx [826.176612ms] Mar 16 00:17:18.674: INFO: Created: latency-svc-gwzgg Mar 16 00:17:18.719: INFO: Got endpoints: latency-svc-gwzgg [849.539161ms] Mar 16 00:17:18.731: INFO: Created: latency-svc-tcjmc Mar 16 00:17:18.747: INFO: Got endpoints: latency-svc-tcjmc [783.472401ms] Mar 16 00:17:18.966: INFO: Created: latency-svc-rhf7s Mar 16 00:17:18.969: INFO: Got endpoints: latency-svc-rhf7s [945.712208ms] Mar 16 00:17:19.383: INFO: Created: latency-svc-z5mgw Mar 16 00:17:19.389: INFO: Got endpoints: latency-svc-z5mgw [1.278986239s] Mar 16 00:17:19.417: INFO: Created: latency-svc-cd8mm Mar 16 00:17:19.431: INFO: Got 
endpoints: latency-svc-cd8mm [1.2808233s] Mar 16 00:17:19.517: INFO: Created: latency-svc-vhfxb Mar 16 00:17:19.519: INFO: Got endpoints: latency-svc-vhfxb [1.243189336s] Mar 16 00:17:19.567: INFO: Created: latency-svc-x2z7x Mar 16 00:17:19.575: INFO: Got endpoints: latency-svc-x2z7x [1.286936983s] Mar 16 00:17:19.597: INFO: Created: latency-svc-zmjzs Mar 16 00:17:19.647: INFO: Got endpoints: latency-svc-zmjzs [1.30456943s] Mar 16 00:17:19.661: INFO: Created: latency-svc-v5rs2 Mar 16 00:17:19.674: INFO: Got endpoints: latency-svc-v5rs2 [1.236337629s] Mar 16 00:17:19.698: INFO: Created: latency-svc-fk2pb Mar 16 00:17:19.711: INFO: Got endpoints: latency-svc-fk2pb [1.263071581s] Mar 16 00:17:19.733: INFO: Created: latency-svc-hqrfz Mar 16 00:17:19.785: INFO: Got endpoints: latency-svc-hqrfz [1.307255266s] Mar 16 00:17:19.801: INFO: Created: latency-svc-b78k7 Mar 16 00:17:19.813: INFO: Got endpoints: latency-svc-b78k7 [1.304972923s] Mar 16 00:17:19.836: INFO: Created: latency-svc-q97gg Mar 16 00:17:19.849: INFO: Got endpoints: latency-svc-q97gg [1.267771147s] Mar 16 00:17:19.872: INFO: Created: latency-svc-zzbxh Mar 16 00:17:19.917: INFO: Got endpoints: latency-svc-zzbxh [1.299919347s] Mar 16 00:17:19.930: INFO: Created: latency-svc-6pk9g Mar 16 00:17:19.946: INFO: Got endpoints: latency-svc-6pk9g [1.292901776s] Mar 16 00:17:19.979: INFO: Created: latency-svc-m6t24 Mar 16 00:17:20.000: INFO: Got endpoints: latency-svc-m6t24 [1.281029314s] Mar 16 00:17:20.055: INFO: Created: latency-svc-q7mmj Mar 16 00:17:20.058: INFO: Got endpoints: latency-svc-q7mmj [1.310415098s] Mar 16 00:17:20.124: INFO: Created: latency-svc-76v4n Mar 16 00:17:20.199: INFO: Got endpoints: latency-svc-76v4n [1.229415746s] Mar 16 00:17:20.214: INFO: Created: latency-svc-ks7j9 Mar 16 00:17:20.234: INFO: Got endpoints: latency-svc-ks7j9 [844.691235ms] Mar 16 00:17:20.354: INFO: Created: latency-svc-lsm9j Mar 16 00:17:20.358: INFO: Got endpoints: latency-svc-lsm9j [926.975296ms] Mar 16 00:17:20.381: 
INFO: Created: latency-svc-kjmbd Mar 16 00:17:20.395: INFO: Got endpoints: latency-svc-kjmbd [875.4295ms] Mar 16 00:17:20.418: INFO: Created: latency-svc-cpvgg Mar 16 00:17:20.431: INFO: Got endpoints: latency-svc-cpvgg [855.985821ms] Mar 16 00:17:20.454: INFO: Created: latency-svc-wjlvh Mar 16 00:17:20.536: INFO: Created: latency-svc-qn5d5 Mar 16 00:17:20.546: INFO: Got endpoints: latency-svc-qn5d5 [871.46004ms] Mar 16 00:17:20.546: INFO: Got endpoints: latency-svc-wjlvh [898.719107ms] Mar 16 00:17:20.580: INFO: Created: latency-svc-zrsj5 Mar 16 00:17:20.594: INFO: Got endpoints: latency-svc-zrsj5 [883.407218ms] Mar 16 00:17:20.614: INFO: Created: latency-svc-b6zvf Mar 16 00:17:20.683: INFO: Got endpoints: latency-svc-b6zvf [898.010548ms] Mar 16 00:17:20.685: INFO: Created: latency-svc-l9dhr Mar 16 00:17:20.722: INFO: Got endpoints: latency-svc-l9dhr [909.125025ms] Mar 16 00:17:20.760: INFO: Created: latency-svc-q5v9f Mar 16 00:17:20.774: INFO: Got endpoints: latency-svc-q5v9f [924.907946ms] Mar 16 00:17:20.827: INFO: Created: latency-svc-8bf9g Mar 16 00:17:20.830: INFO: Got endpoints: latency-svc-8bf9g [913.940649ms] Mar 16 00:17:20.856: INFO: Created: latency-svc-hc59r Mar 16 00:17:20.871: INFO: Got endpoints: latency-svc-hc59r [925.235806ms] Mar 16 00:17:20.903: INFO: Created: latency-svc-xgdxb Mar 16 00:17:20.914: INFO: Got endpoints: latency-svc-xgdxb [913.050023ms] Mar 16 00:17:20.971: INFO: Created: latency-svc-dkk6g Mar 16 00:17:20.973: INFO: Got endpoints: latency-svc-dkk6g [915.493037ms] Mar 16 00:17:21.004: INFO: Created: latency-svc-5vlp7 Mar 16 00:17:21.016: INFO: Got endpoints: latency-svc-5vlp7 [817.631202ms] Mar 16 00:17:21.034: INFO: Created: latency-svc-7nrn7 Mar 16 00:17:21.047: INFO: Got endpoints: latency-svc-7nrn7 [813.288748ms] Mar 16 00:17:21.063: INFO: Created: latency-svc-4wqnw Mar 16 00:17:21.102: INFO: Got endpoints: latency-svc-4wqnw [744.563073ms] Mar 16 00:17:21.113: INFO: Created: latency-svc-nqz52 Mar 16 00:17:21.143: INFO: Got 
endpoints: latency-svc-nqz52 [748.15379ms] Mar 16 00:17:21.179: INFO: Created: latency-svc-5zb4x Mar 16 00:17:21.252: INFO: Got endpoints: latency-svc-5zb4x [820.660134ms] Mar 16 00:17:21.254: INFO: Created: latency-svc-lshwc Mar 16 00:17:21.267: INFO: Got endpoints: latency-svc-lshwc [720.654314ms] Mar 16 00:17:21.291: INFO: Created: latency-svc-xng8m Mar 16 00:17:21.303: INFO: Got endpoints: latency-svc-xng8m [756.603556ms] Mar 16 00:17:21.327: INFO: Created: latency-svc-9btwc Mar 16 00:17:21.339: INFO: Got endpoints: latency-svc-9btwc [744.813189ms] Mar 16 00:17:21.426: INFO: Created: latency-svc-gr2zx Mar 16 00:17:21.430: INFO: Got endpoints: latency-svc-gr2zx [746.306967ms] Mar 16 00:17:21.484: INFO: Created: latency-svc-tfbkl Mar 16 00:17:21.496: INFO: Got endpoints: latency-svc-tfbkl [773.327362ms] Mar 16 00:17:21.519: INFO: Created: latency-svc-7pwsd Mar 16 00:17:21.570: INFO: Got endpoints: latency-svc-7pwsd [795.127292ms] Mar 16 00:17:21.572: INFO: Created: latency-svc-wsbwj Mar 16 00:17:21.591: INFO: Got endpoints: latency-svc-wsbwj [760.754839ms] Mar 16 00:17:21.621: INFO: Created: latency-svc-ksxp5 Mar 16 00:17:21.635: INFO: Got endpoints: latency-svc-ksxp5 [763.244743ms] Mar 16 00:17:21.659: INFO: Created: latency-svc-hsm2x Mar 16 00:17:21.731: INFO: Got endpoints: latency-svc-hsm2x [817.341619ms] Mar 16 00:17:21.733: INFO: Created: latency-svc-6gdhw Mar 16 00:17:21.755: INFO: Got endpoints: latency-svc-6gdhw [781.76453ms] Mar 16 00:17:21.785: INFO: Created: latency-svc-mkn4z Mar 16 00:17:21.797: INFO: Got endpoints: latency-svc-mkn4z [780.626039ms] Mar 16 00:17:21.831: INFO: Created: latency-svc-jn2x5 Mar 16 00:17:22.192: INFO: Got endpoints: latency-svc-jn2x5 [1.144592096s] Mar 16 00:17:22.583: INFO: Created: latency-svc-j8nm9 Mar 16 00:17:22.594: INFO: Got endpoints: latency-svc-j8nm9 [1.491883661s] Mar 16 00:17:22.630: INFO: Created: latency-svc-vjwbr Mar 16 00:17:22.639: INFO: Got endpoints: latency-svc-vjwbr [1.496270743s] Mar 16 00:17:22.659: 
INFO: Created: latency-svc-76lhx Mar 16 00:17:22.676: INFO: Got endpoints: latency-svc-76lhx [1.424030329s] Mar 16 00:17:22.737: INFO: Created: latency-svc-t2g2d Mar 16 00:17:22.740: INFO: Got endpoints: latency-svc-t2g2d [1.473686185s] Mar 16 00:17:22.762: INFO: Created: latency-svc-gxbqz Mar 16 00:17:22.778: INFO: Got endpoints: latency-svc-gxbqz [1.475839881s] Mar 16 00:17:22.804: INFO: Created: latency-svc-8qhfl Mar 16 00:17:22.815: INFO: Got endpoints: latency-svc-8qhfl [74.304473ms] Mar 16 00:17:22.881: INFO: Created: latency-svc-bm6vl Mar 16 00:17:22.884: INFO: Got endpoints: latency-svc-bm6vl [1.545477729s] Mar 16 00:17:22.910: INFO: Created: latency-svc-52jbl Mar 16 00:17:22.923: INFO: Got endpoints: latency-svc-52jbl [1.493487768s] Mar 16 00:17:22.940: INFO: Created: latency-svc-kk2f6 Mar 16 00:17:22.964: INFO: Got endpoints: latency-svc-kk2f6 [1.468392809s] Mar 16 00:17:23.037: INFO: Created: latency-svc-gch7h Mar 16 00:17:23.041: INFO: Got endpoints: latency-svc-gch7h [1.470899616s] Mar 16 00:17:23.092: INFO: Created: latency-svc-v24hv Mar 16 00:17:23.104: INFO: Got endpoints: latency-svc-v24hv [1.512339992s] Mar 16 00:17:23.122: INFO: Created: latency-svc-v47lp Mar 16 00:17:23.134: INFO: Got endpoints: latency-svc-v47lp [1.499543868s] Mar 16 00:17:23.175: INFO: Created: latency-svc-p7vhc Mar 16 00:17:23.178: INFO: Got endpoints: latency-svc-p7vhc [1.446600576s] Mar 16 00:17:23.206: INFO: Created: latency-svc-lvglm Mar 16 00:17:23.218: INFO: Got endpoints: latency-svc-lvglm [1.463292094s] Mar 16 00:17:23.264: INFO: Created: latency-svc-nbrjn Mar 16 00:17:23.330: INFO: Got endpoints: latency-svc-nbrjn [1.53328668s] Mar 16 00:17:23.332: INFO: Created: latency-svc-hs2rl Mar 16 00:17:23.339: INFO: Got endpoints: latency-svc-hs2rl [1.147300906s] Mar 16 00:17:23.363: INFO: Created: latency-svc-crrtq Mar 16 00:17:23.392: INFO: Got endpoints: latency-svc-crrtq [797.326256ms] Mar 16 00:17:23.422: INFO: Created: latency-svc-v84vb Mar 16 00:17:23.480: INFO: Got 
endpoints: latency-svc-v84vb [840.340836ms] Mar 16 00:17:23.492: INFO: Created: latency-svc-cr5j2 Mar 16 00:17:23.508: INFO: Got endpoints: latency-svc-cr5j2 [832.43566ms] Mar 16 00:17:23.528: INFO: Created: latency-svc-zz5h2 Mar 16 00:17:23.538: INFO: Got endpoints: latency-svc-zz5h2 [759.604717ms] Mar 16 00:17:23.570: INFO: Created: latency-svc-2pklr Mar 16 00:17:23.648: INFO: Got endpoints: latency-svc-2pklr [832.874105ms] Mar 16 00:17:23.650: INFO: Created: latency-svc-6qnv7 Mar 16 00:17:23.659: INFO: Got endpoints: latency-svc-6qnv7 [774.332858ms] Mar 16 00:17:23.686: INFO: Created: latency-svc-h57n8 Mar 16 00:17:23.695: INFO: Got endpoints: latency-svc-h57n8 [771.992369ms] Mar 16 00:17:23.716: INFO: Created: latency-svc-ckk5d Mar 16 00:17:23.725: INFO: Got endpoints: latency-svc-ckk5d [761.021232ms] Mar 16 00:17:23.744: INFO: Created: latency-svc-r6c5z Mar 16 00:17:23.779: INFO: Got endpoints: latency-svc-r6c5z [738.230853ms] Mar 16 00:17:23.792: INFO: Created: latency-svc-swmtk Mar 16 00:17:23.804: INFO: Got endpoints: latency-svc-swmtk [699.947505ms] Mar 16 00:17:23.822: INFO: Created: latency-svc-78stb Mar 16 00:17:23.846: INFO: Got endpoints: latency-svc-78stb [711.613288ms] Mar 16 00:17:23.872: INFO: Created: latency-svc-6lcfb Mar 16 00:17:23.910: INFO: Got endpoints: latency-svc-6lcfb [732.750047ms] Mar 16 00:17:23.930: INFO: Created: latency-svc-67hfq Mar 16 00:17:23.943: INFO: Got endpoints: latency-svc-67hfq [724.126697ms] Mar 16 00:17:23.960: INFO: Created: latency-svc-5l6c2 Mar 16 00:17:23.973: INFO: Got endpoints: latency-svc-5l6c2 [642.574301ms] Mar 16 00:17:23.996: INFO: Created: latency-svc-fzm2n Mar 16 00:17:24.010: INFO: Got endpoints: latency-svc-fzm2n [670.557118ms] Mar 16 00:17:24.055: INFO: Created: latency-svc-hwbqt Mar 16 00:17:24.058: INFO: Got endpoints: latency-svc-hwbqt [666.482961ms] Mar 16 00:17:24.082: INFO: Created: latency-svc-zwsk2 Mar 16 00:17:24.094: INFO: Got endpoints: latency-svc-zwsk2 [614.343862ms] Mar 16 00:17:24.117: 
INFO: Created: latency-svc-5gsj5 Mar 16 00:17:24.125: INFO: Got endpoints: latency-svc-5gsj5 [616.427593ms] Mar 16 00:17:24.145: INFO: Created: latency-svc-qnpj8 Mar 16 00:17:24.192: INFO: Got endpoints: latency-svc-qnpj8 [653.995859ms] Mar 16 00:17:24.206: INFO: Created: latency-svc-stwcb Mar 16 00:17:24.242: INFO: Got endpoints: latency-svc-stwcb [594.842914ms] Mar 16 00:17:24.273: INFO: Created: latency-svc-ghzkb Mar 16 00:17:24.348: INFO: Got endpoints: latency-svc-ghzkb [688.823928ms] Mar 16 00:17:24.350: INFO: Created: latency-svc-f7nql Mar 16 00:17:24.359: INFO: Got endpoints: latency-svc-f7nql [663.867533ms] Mar 16 00:17:24.381: INFO: Created: latency-svc-rxxg8 Mar 16 00:17:24.396: INFO: Got endpoints: latency-svc-rxxg8 [670.352027ms] Mar 16 00:17:24.415: INFO: Created: latency-svc-pn649 Mar 16 00:17:24.432: INFO: Got endpoints: latency-svc-pn649 [652.962229ms] Mar 16 00:17:24.492: INFO: Created: latency-svc-42wg2 Mar 16 00:17:24.494: INFO: Got endpoints: latency-svc-42wg2 [690.439027ms] Mar 16 00:17:24.517: INFO: Created: latency-svc-fc7tv Mar 16 00:17:24.528: INFO: Got endpoints: latency-svc-fc7tv [682.280709ms] Mar 16 00:17:24.548: INFO: Created: latency-svc-mgdlf Mar 16 00:17:24.559: INFO: Got endpoints: latency-svc-mgdlf [648.291356ms] Mar 16 00:17:24.579: INFO: Created: latency-svc-54st7 Mar 16 00:17:24.641: INFO: Got endpoints: latency-svc-54st7 [698.501198ms] Mar 16 00:17:24.679: INFO: Created: latency-svc-nb5ml Mar 16 00:17:24.691: INFO: Got endpoints: latency-svc-nb5ml [718.17886ms] Mar 16 00:17:24.773: INFO: Created: latency-svc-5zlp8 Mar 16 00:17:24.776: INFO: Got endpoints: latency-svc-5zlp8 [766.400876ms] Mar 16 00:17:24.819: INFO: Created: latency-svc-pvjzd Mar 16 00:17:24.849: INFO: Got endpoints: latency-svc-pvjzd [790.67714ms] Mar 16 00:17:24.910: INFO: Created: latency-svc-4vdtz Mar 16 00:17:24.927: INFO: Got endpoints: latency-svc-4vdtz [832.595586ms] Mar 16 00:17:24.945: INFO: Created: latency-svc-wl4ql Mar 16 00:17:24.962: INFO: Got 
endpoints: latency-svc-wl4ql [837.459779ms] Mar 16 00:17:24.979: INFO: Created: latency-svc-gjc9l Mar 16 00:17:25.062: INFO: Created: latency-svc-qts5v Mar 16 00:17:25.075: INFO: Got endpoints: latency-svc-qts5v [832.52964ms] Mar 16 00:17:25.075: INFO: Got endpoints: latency-svc-gjc9l [882.837895ms] Mar 16 00:17:25.106: INFO: Created: latency-svc-wp6v2 Mar 16 00:17:25.119: INFO: Got endpoints: latency-svc-wp6v2 [771.349641ms] Mar 16 00:17:25.137: INFO: Created: latency-svc-m48kc Mar 16 00:17:25.149: INFO: Got endpoints: latency-svc-m48kc [790.281702ms] Mar 16 00:17:25.205: INFO: Created: latency-svc-vr5rr Mar 16 00:17:25.207: INFO: Got endpoints: latency-svc-vr5rr [811.61514ms] Mar 16 00:17:25.233: INFO: Created: latency-svc-q6m98 Mar 16 00:17:25.246: INFO: Got endpoints: latency-svc-q6m98 [814.225268ms] Mar 16 00:17:25.267: INFO: Created: latency-svc-jqbrx Mar 16 00:17:25.282: INFO: Got endpoints: latency-svc-jqbrx [787.849794ms] Mar 16 00:17:25.303: INFO: Created: latency-svc-dg79n Mar 16 00:17:25.360: INFO: Got endpoints: latency-svc-dg79n [831.442072ms] Mar 16 00:17:25.362: INFO: Created: latency-svc-lflkp Mar 16 00:17:25.381: INFO: Got endpoints: latency-svc-lflkp [821.783849ms] Mar 16 00:17:25.413: INFO: Created: latency-svc-dbgv2 Mar 16 00:17:25.427: INFO: Got endpoints: latency-svc-dbgv2 [785.589333ms] Mar 16 00:17:25.456: INFO: Created: latency-svc-2px7q Mar 16 00:17:25.515: INFO: Got endpoints: latency-svc-2px7q [824.05765ms] Mar 16 00:17:25.518: INFO: Created: latency-svc-5t74w Mar 16 00:17:25.523: INFO: Got endpoints: latency-svc-5t74w [747.281714ms] Mar 16 00:17:25.561: INFO: Created: latency-svc-qpxns Mar 16 00:17:25.596: INFO: Got endpoints: latency-svc-qpxns [746.984616ms] Mar 16 00:17:25.667: INFO: Created: latency-svc-r87q6 Mar 16 00:17:25.668: INFO: Got endpoints: latency-svc-r87q6 [741.409832ms] Mar 16 00:17:25.701: INFO: Created: latency-svc-gbd9t Mar 16 00:17:25.717: INFO: Got endpoints: latency-svc-gbd9t [754.154019ms] Mar 16 00:17:25.737: 
INFO: Created: latency-svc-whnxs Mar 16 00:17:25.753: INFO: Got endpoints: latency-svc-whnxs [677.797044ms] Mar 16 00:17:25.803: INFO: Created: latency-svc-ln97b Mar 16 00:17:25.807: INFO: Got endpoints: latency-svc-ln97b [731.665214ms] Mar 16 00:17:25.830: INFO: Created: latency-svc-6dc2s Mar 16 00:17:25.843: INFO: Got endpoints: latency-svc-6dc2s [723.937994ms] Mar 16 00:17:25.867: INFO: Created: latency-svc-h28fv Mar 16 00:17:25.886: INFO: Got endpoints: latency-svc-h28fv [736.071218ms] Mar 16 00:17:25.959: INFO: Created: latency-svc-tbtkn Mar 16 00:17:25.961: INFO: Got endpoints: latency-svc-tbtkn [754.156516ms] Mar 16 00:17:25.982: INFO: Created: latency-svc-zmjtj Mar 16 00:17:26.006: INFO: Got endpoints: latency-svc-zmjtj [759.825943ms] Mar 16 00:17:26.036: INFO: Created: latency-svc-j6rls Mar 16 00:17:26.048: INFO: Got endpoints: latency-svc-j6rls [765.956249ms] Mar 16 00:17:26.103: INFO: Created: latency-svc-rzs94 Mar 16 00:17:26.107: INFO: Got endpoints: latency-svc-rzs94 [746.845474ms] Mar 16 00:17:26.136: INFO: Created: latency-svc-zt9sx Mar 16 00:17:26.151: INFO: Got endpoints: latency-svc-zt9sx [770.110512ms] Mar 16 00:17:26.172: INFO: Created: latency-svc-qzf5w Mar 16 00:17:26.187: INFO: Got endpoints: latency-svc-qzf5w [760.203435ms] Mar 16 00:17:26.259: INFO: Created: latency-svc-4h7w6 Mar 16 00:17:26.270: INFO: Got endpoints: latency-svc-4h7w6 [754.522632ms] Mar 16 00:17:26.309: INFO: Created: latency-svc-ldhqn Mar 16 00:17:26.320: INFO: Got endpoints: latency-svc-ldhqn [796.096272ms] Mar 16 00:17:26.340: INFO: Created: latency-svc-5f5qq Mar 16 00:17:26.356: INFO: Got endpoints: latency-svc-5f5qq [759.915508ms] Mar 16 00:17:26.394: INFO: Created: latency-svc-ls4jw Mar 16 00:17:26.410: INFO: Got endpoints: latency-svc-ls4jw [741.992646ms] Mar 16 00:17:26.430: INFO: Created: latency-svc-lc9kc Mar 16 00:17:26.446: INFO: Got endpoints: latency-svc-lc9kc [729.511054ms] Mar 16 00:17:26.466: INFO: Created: latency-svc-92bnv Mar 16 00:17:26.558: INFO: Got 
endpoints: latency-svc-92bnv [804.695612ms] Mar 16 00:17:26.560: INFO: Created: latency-svc-srw9b Mar 16 00:17:26.567: INFO: Got endpoints: latency-svc-srw9b [759.870643ms] Mar 16 00:17:26.589: INFO: Created: latency-svc-pvmc6 Mar 16 00:17:26.597: INFO: Got endpoints: latency-svc-pvmc6 [754.089547ms] Mar 16 00:17:26.618: INFO: Created: latency-svc-r8x9m Mar 16 00:17:26.634: INFO: Got endpoints: latency-svc-r8x9m [748.597012ms] Mar 16 00:17:26.652: INFO: Created: latency-svc-7w7c8 Mar 16 00:17:26.713: INFO: Got endpoints: latency-svc-7w7c8 [751.473688ms] Mar 16 00:17:26.715: INFO: Created: latency-svc-529gq Mar 16 00:17:26.732: INFO: Got endpoints: latency-svc-529gq [726.2628ms] Mar 16 00:17:26.769: INFO: Created: latency-svc-2bfq7 Mar 16 00:17:26.785: INFO: Got endpoints: latency-svc-2bfq7 [736.629325ms] Mar 16 00:17:26.811: INFO: Created: latency-svc-92kp8 Mar 16 00:17:26.881: INFO: Got endpoints: latency-svc-92kp8 [774.020841ms] Mar 16 00:17:26.883: INFO: Created: latency-svc-x8mws Mar 16 00:17:26.904: INFO: Got endpoints: latency-svc-x8mws [753.390575ms] Mar 16 00:17:26.928: INFO: Created: latency-svc-qw2b9 Mar 16 00:17:26.941: INFO: Got endpoints: latency-svc-qw2b9 [754.371428ms] Mar 16 00:17:26.972: INFO: Created: latency-svc-g967p Mar 16 00:17:27.019: INFO: Got endpoints: latency-svc-g967p [748.622323ms] Mar 16 00:17:27.032: INFO: Created: latency-svc-7k5z5 Mar 16 00:17:27.050: INFO: Got endpoints: latency-svc-7k5z5 [730.32925ms] Mar 16 00:17:27.078: INFO: Created: latency-svc-pt6tn Mar 16 00:17:27.092: INFO: Got endpoints: latency-svc-pt6tn [736.402729ms] Mar 16 00:17:27.169: INFO: Created: latency-svc-lrkc7 Mar 16 00:17:27.171: INFO: Got endpoints: latency-svc-lrkc7 [760.731434ms] Mar 16 00:17:27.216: INFO: Created: latency-svc-2t4lk Mar 16 00:17:27.249: INFO: Got endpoints: latency-svc-2t4lk [802.241682ms] Mar 16 00:17:27.331: INFO: Created: latency-svc-brtg4 Mar 16 00:17:27.334: INFO: Got endpoints: latency-svc-brtg4 [776.746525ms] Mar 16 00:17:27.368: 
INFO: Created: latency-svc-ftj7m Mar 16 00:17:27.381: INFO: Got endpoints: latency-svc-ftj7m [814.580878ms] Mar 16 00:17:27.396: INFO: Created: latency-svc-g7xnx Mar 16 00:17:27.411: INFO: Got endpoints: latency-svc-g7xnx [813.898615ms] Mar 16 00:17:27.411: INFO: Latencies: [30.996536ms 74.304473ms 97.950867ms 130.356719ms 162.15381ms 196.195041ms 238.377943ms 304.607952ms 366.208106ms 397.402719ms 451.885227ms 535.601817ms 565.732924ms 594.842914ms 614.037193ms 614.343862ms 616.427593ms 642.574301ms 648.291356ms 652.962229ms 653.995859ms 663.867533ms 666.482961ms 670.352027ms 670.557118ms 677.797044ms 682.280709ms 688.823928ms 690.439027ms 698.501198ms 699.947505ms 711.613288ms 718.17886ms 720.654314ms 723.479541ms 723.937994ms 724.126697ms 726.2628ms 729.511054ms 730.32925ms 731.665214ms 732.750047ms 736.071218ms 736.402729ms 736.629325ms 738.230853ms 741.409832ms 741.992646ms 744.563073ms 744.813189ms 746.306967ms 746.845474ms 746.984616ms 747.281714ms 748.15379ms 748.597012ms 748.622323ms 751.473688ms 753.390575ms 754.089547ms 754.154019ms 754.156516ms 754.371428ms 754.522632ms 756.603556ms 759.604717ms 759.825943ms 759.870643ms 759.915508ms 760.203435ms 760.731434ms 760.754839ms 761.021232ms 762.468357ms 763.244743ms 765.956249ms 766.400876ms 770.110512ms 771.349641ms 771.437164ms 771.992369ms 773.327362ms 774.020841ms 774.332858ms 776.746525ms 780.626039ms 780.696233ms 781.76453ms 783.472401ms 785.589333ms 787.849794ms 789.911388ms 790.281702ms 790.67714ms 795.127292ms 796.096272ms 797.326256ms 802.241682ms 804.695612ms 811.61514ms 813.288748ms 813.637202ms 813.898615ms 814.225268ms 814.580878ms 817.341619ms 817.631202ms 820.660134ms 821.783849ms 824.05765ms 825.973628ms 826.176612ms 827.719406ms 831.442072ms 832.43566ms 832.52964ms 832.595586ms 832.874105ms 836.65987ms 837.459779ms 840.340836ms 844.691235ms 849.539161ms 850.931496ms 855.985821ms 871.46004ms 875.4295ms 881.754576ms 882.071062ms 882.837895ms 883.407218ms 898.010548ms 898.719107ms 899.953539ms 
909.125025ms 912.044518ms 913.050023ms 913.940649ms 915.493037ms 924.907946ms 925.235806ms 926.975296ms 945.41568ms 945.712208ms 962.385161ms 963.952277ms 964.504703ms 969.779029ms 971.494677ms 993.621376ms 999.322246ms 1.046117709s 1.115186706s 1.144592096s 1.147300906s 1.147796037s 1.229415746s 1.236337629s 1.243189336s 1.263071581s 1.267771147s 1.278986239s 1.2808233s 1.281029314s 1.286936983s 1.292901776s 1.299919347s 1.302354201s 1.30456943s 1.304972923s 1.307255266s 1.310415098s 1.395658918s 1.404508078s 1.424030329s 1.446600576s 1.453957611s 1.463292094s 1.468392809s 1.470899616s 1.473686185s 1.475839881s 1.481910094s 1.482165026s 1.487920229s 1.491883661s 1.493487768s 1.496270743s 1.499543868s 1.512339992s 1.53328668s 1.544630863s 1.545477729s 1.55620823s 1.557004713s 1.568034198s 1.568168048s 1.573150838s 1.578410265s 1.597962043s] Mar 16 00:17:27.411: INFO: 50 %ile: 813.288748ms Mar 16 00:17:27.411: INFO: 90 %ile: 1.473686185s Mar 16 00:17:27.411: INFO: 99 %ile: 1.578410265s Mar 16 00:17:27.411: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:17:27.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-fkddl" for this suite. 
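The 50/90/99 %ile lines above are derived by sorting the 200 latency samples and indexing into the sorted list. A minimal nearest-rank sketch of that computation (illustrative only — `percentile` here is a hypothetical helper, and the e2e framework's exact rounding rule may differ):

```python
def percentile(sorted_latencies, p):
    """Return the p-th percentile of an already-sorted list of samples.

    Nearest-rank style: index = floor(p/100 * N), clamped to the last
    element. This mirrors the spirit of the e2e latency summary above,
    not its exact implementation.
    """
    n = len(sorted_latencies)
    idx = min(n - 1, int(p / 100.0 * n))
    return sorted_latencies[idx]

# Ten synthetic latency samples (seconds), sorted as the framework sorts them.
samples = sorted([0.5, 0.7, 0.8, 0.9, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7])
p50 = percentile(samples, 50)
p90 = percentile(samples, 90)
p99 = percentile(samples, 99)
```

With 200 real samples, `percentile(samples, 50)` lands on the 101st sorted value, which is why the reported 50 %ile (813.288748ms) is itself one of the listed latencies.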
Mar 16 00:18:07.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:18:07.520: INFO: namespace: e2e-tests-svc-latency-fkddl, resource: bindings, ignored listing per whitelist Mar 16 00:18:07.570: INFO: namespace e2e-tests-svc-latency-fkddl deletion completed in 40.152406215s • [SLOW TEST:56.205 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:18:07.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-9caab097-671b-11ea-811c-0242ac110013 STEP: Creating a pod to test consume secrets Mar 16 00:18:07.857: INFO: Waiting up to 5m0s for pod "pod-secrets-9cab1547-671b-11ea-811c-0242ac110013" in namespace "e2e-tests-secrets-54m8m" to be "success or failure" Mar 16 00:18:07.990: INFO: Pod "pod-secrets-9cab1547-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 132.483036ms Mar 16 00:18:09.997: INFO: Pod "pod-secrets-9cab1547-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140242907s Mar 16 00:18:12.019: INFO: Pod "pod-secrets-9cab1547-671b-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.162070153s Mar 16 00:18:14.023: INFO: Pod "pod-secrets-9cab1547-671b-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.166236823s STEP: Saw pod success Mar 16 00:18:14.024: INFO: Pod "pod-secrets-9cab1547-671b-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:18:14.026: INFO: Trying to get logs from node hunter-worker pod pod-secrets-9cab1547-671b-11ea-811c-0242ac110013 container secret-volume-test: STEP: delete the pod Mar 16 00:18:14.066: INFO: Waiting for pod pod-secrets-9cab1547-671b-11ea-811c-0242ac110013 to disappear Mar 16 00:18:14.077: INFO: Pod pod-secrets-9cab1547-671b-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:18:14.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-54m8m" for this suite. 
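The repeated "Waiting up to 5m0s for pod … to be 'success or failure'" sequences above (Pending → Pending → Running → Succeeded, with elapsed times) follow a simple poll-until-terminal-phase loop. A self-contained sketch of that pattern, assuming a caller-supplied `get_phase` stand-in for the API call (the real framework polls the Kubernetes API with its own intervals and conditions):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or timeout.

    get_phase is a hypothetical callable returning the pod's phase string
    ("Pending", "Running", "Succeeded", "Failed"). clock/sleep are
    injectable for testing. Raises TimeoutError after `timeout` seconds,
    mirroring the 5m0s bound in the log above.
    """
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError(
        "pod did not reach a terminal phase within %.0fs" % timeout)
```

The log's "success or failure" condition corresponds to the terminal-phase check: the pod run above passes through Running before settling at Succeeded, at which point the test reads the container logs and deletes the pod.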
Mar 16 00:18:20.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:18:20.105: INFO: namespace: e2e-tests-secrets-54m8m, resource: bindings, ignored listing per whitelist Mar 16 00:18:20.171: INFO: namespace e2e-tests-secrets-54m8m deletion completed in 6.089482074s • [SLOW TEST:12.600 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:18:20.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-a41584f0-671b-11ea-811c-0242ac110013 STEP: Creating secret with name s-test-opt-upd-a415856f-671b-11ea-811c-0242ac110013 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a41584f0-671b-11ea-811c-0242ac110013 STEP: Updating secret s-test-opt-upd-a415856f-671b-11ea-811c-0242ac110013 STEP: Creating secret with name s-test-opt-create-a41585a3-671b-11ea-811c-0242ac110013 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:19:46.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-wv82h" for this suite. Mar 16 00:20:10.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:20:11.027: INFO: namespace: e2e-tests-secrets-wv82h, resource: bindings, ignored listing per whitelist Mar 16 00:20:11.072: INFO: namespace e2e-tests-secrets-wv82h deletion completed in 24.089047328s • [SLOW TEST:110.901 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:20:11.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 16 00:20:12.174: INFO: Waiting up to 5m0s for pod "downward-api-e6a3566b-671b-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-j5772" to be "success or failure" Mar 16 00:20:12.248: INFO: Pod "downward-api-e6a3566b-671b-11ea-811c-0242ac110013": Phase="Pending", 
Reason="", readiness=false. Elapsed: 73.846624ms Mar 16 00:20:14.251: INFO: Pod "downward-api-e6a3566b-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076764777s Mar 16 00:20:16.368: INFO: Pod "downward-api-e6a3566b-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19416321s Mar 16 00:20:18.543: INFO: Pod "downward-api-e6a3566b-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.368390091s Mar 16 00:20:20.550: INFO: Pod "downward-api-e6a3566b-671b-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.375629648s STEP: Saw pod success Mar 16 00:20:20.550: INFO: Pod "downward-api-e6a3566b-671b-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:20:20.553: INFO: Trying to get logs from node hunter-worker pod downward-api-e6a3566b-671b-11ea-811c-0242ac110013 container dapi-container: STEP: delete the pod Mar 16 00:20:20.717: INFO: Waiting for pod downward-api-e6a3566b-671b-11ea-811c-0242ac110013 to disappear Mar 16 00:20:20.914: INFO: Pod downward-api-e6a3566b-671b-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:20:20.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-j5772" for this suite. 
Mar 16 00:20:27.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:20:27.236: INFO: namespace: e2e-tests-downward-api-j5772, resource: bindings, ignored listing per whitelist Mar 16 00:20:27.278: INFO: namespace e2e-tests-downward-api-j5772 deletion completed in 6.359600798s • [SLOW TEST:16.206 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:20:27.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 16 00:20:27.417: INFO: Waiting up to 5m0s for pod "pod-efd93798-671b-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-424fr" to be "success or failure" Mar 16 00:20:27.420: INFO: Pod "pod-efd93798-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 3.129507ms Mar 16 00:20:29.453: INFO: Pod "pod-efd93798-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035795256s Mar 16 00:20:31.458: INFO: Pod "pod-efd93798-671b-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040250021s STEP: Saw pod success Mar 16 00:20:31.458: INFO: Pod "pod-efd93798-671b-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:20:31.461: INFO: Trying to get logs from node hunter-worker2 pod pod-efd93798-671b-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 16 00:20:31.586: INFO: Waiting for pod pod-efd93798-671b-11ea-811c-0242ac110013 to disappear Mar 16 00:20:31.600: INFO: Pod pod-efd93798-671b-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:20:31.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-424fr" for this suite. Mar 16 00:20:39.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:20:39.692: INFO: namespace: e2e-tests-emptydir-424fr, resource: bindings, ignored listing per whitelist Mar 16 00:20:39.706: INFO: namespace e2e-tests-emptydir-424fr deletion completed in 8.102054801s • [SLOW TEST:12.427 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client Mar 16 00:20:39.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-v4jvs/configmap-test-f73d50b4-671b-11ea-811c-0242ac110013 STEP: Creating a pod to test consume configMaps Mar 16 00:20:39.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-f73f1e81-671b-11ea-811c-0242ac110013" in namespace "e2e-tests-configmap-v4jvs" to be "success or failure" Mar 16 00:20:39.846: INFO: Pod "pod-configmaps-f73f1e81-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 14.190251ms Mar 16 00:20:41.859: INFO: Pod "pod-configmaps-f73f1e81-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026349513s Mar 16 00:20:43.863: INFO: Pod "pod-configmaps-f73f1e81-671b-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031014243s STEP: Saw pod success Mar 16 00:20:43.863: INFO: Pod "pod-configmaps-f73f1e81-671b-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:20:43.866: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-f73f1e81-671b-11ea-811c-0242ac110013 container env-test: STEP: delete the pod Mar 16 00:20:43.951: INFO: Waiting for pod pod-configmaps-f73f1e81-671b-11ea-811c-0242ac110013 to disappear Mar 16 00:20:43.973: INFO: Pod pod-configmaps-f73f1e81-671b-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:20:43.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-v4jvs" for this suite. 
Mar 16 00:20:50.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:20:50.017: INFO: namespace: e2e-tests-configmap-v4jvs, resource: bindings, ignored listing per whitelist Mar 16 00:20:50.072: INFO: namespace e2e-tests-configmap-v4jvs deletion completed in 6.095168918s • [SLOW TEST:10.366 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:20:50.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Mar 16 00:20:50.280: INFO: Waiting up to 5m0s for pod "client-containers-fd7a01a8-671b-11ea-811c-0242ac110013" in namespace "e2e-tests-containers-59fj4" to be "success or failure" Mar 16 00:20:50.290: INFO: Pod "client-containers-fd7a01a8-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.823629ms Mar 16 00:20:52.294: INFO: Pod "client-containers-fd7a01a8-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013826809s Mar 16 00:20:54.356: INFO: Pod "client-containers-fd7a01a8-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07579674s Mar 16 00:20:56.369: INFO: Pod "client-containers-fd7a01a8-671b-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089042943s Mar 16 00:20:58.373: INFO: Pod "client-containers-fd7a01a8-671b-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093381372s STEP: Saw pod success Mar 16 00:20:58.373: INFO: Pod "client-containers-fd7a01a8-671b-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:20:58.377: INFO: Trying to get logs from node hunter-worker2 pod client-containers-fd7a01a8-671b-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 16 00:20:58.399: INFO: Waiting for pod client-containers-fd7a01a8-671b-11ea-811c-0242ac110013 to disappear Mar 16 00:20:58.403: INFO: Pod client-containers-fd7a01a8-671b-11ea-811c-0242ac110013 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:20:58.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-59fj4" for this suite. 
Mar 16 00:21:04.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:21:04.494: INFO: namespace: e2e-tests-containers-59fj4, resource: bindings, ignored listing per whitelist Mar 16 00:21:04.499: INFO: namespace e2e-tests-containers-59fj4 deletion completed in 6.092415166s • [SLOW TEST:14.428 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:21:04.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-05ff9c92-671c-11ea-811c-0242ac110013 STEP: Creating a pod to test consume secrets Mar 16 00:21:04.858: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0602371c-671c-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-tvgmq" to be "success or failure" Mar 16 00:21:04.865: INFO: Pod "pod-projected-secrets-0602371c-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", 
readiness=false. Elapsed: 7.384162ms Mar 16 00:21:06.868: INFO: Pod "pod-projected-secrets-0602371c-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010177028s Mar 16 00:21:08.896: INFO: Pod "pod-projected-secrets-0602371c-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037941572s Mar 16 00:21:10.900: INFO: Pod "pod-projected-secrets-0602371c-671c-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042321095s STEP: Saw pod success Mar 16 00:21:10.900: INFO: Pod "pod-projected-secrets-0602371c-671c-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:21:10.903: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-0602371c-671c-11ea-811c-0242ac110013 container projected-secret-volume-test: STEP: delete the pod Mar 16 00:21:11.041: INFO: Waiting for pod pod-projected-secrets-0602371c-671c-11ea-811c-0242ac110013 to disappear Mar 16 00:21:11.171: INFO: Pod pod-projected-secrets-0602371c-671c-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:21:11.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tvgmq" for this suite. 
Mar 16 00:21:17.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:21:17.221: INFO: namespace: e2e-tests-projected-tvgmq, resource: bindings, ignored listing per whitelist
Mar 16 00:21:17.272: INFO: namespace e2e-tests-projected-tvgmq deletion completed in 6.097155961s
• [SLOW TEST:12.772 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:21:17.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-0d9d8d93-671c-11ea-811c-0242ac110013
STEP: Creating a pod to test consume configMaps
Mar 16 00:21:17.390: INFO: Waiting up to 5m0s for pod "pod-configmaps-0da055fc-671c-11ea-811c-0242ac110013" in namespace "e2e-tests-configmap-ch72v" to be "success or failure"
Mar 16 00:21:17.399: INFO: Pod "pod-configmaps-0da055fc-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 8.851229ms
Mar 16 00:21:19.402: INFO: Pod "pod-configmaps-0da055fc-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012646148s
Mar 16 00:21:21.406: INFO: Pod "pod-configmaps-0da055fc-671c-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016447881s
STEP: Saw pod success
Mar 16 00:21:21.406: INFO: Pod "pod-configmaps-0da055fc-671c-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 16 00:21:21.409: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-0da055fc-671c-11ea-811c-0242ac110013 container configmap-volume-test:
STEP: delete the pod
Mar 16 00:21:21.495: INFO: Waiting for pod pod-configmaps-0da055fc-671c-11ea-811c-0242ac110013 to disappear
Mar 16 00:21:21.652: INFO: Pod pod-configmaps-0da055fc-671c-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:21:21.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ch72v" for this suite.
Mar 16 00:21:28.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:21:28.710: INFO: namespace: e2e-tests-configmap-ch72v, resource: bindings, ignored listing per whitelist
Mar 16 00:21:28.748: INFO: namespace e2e-tests-configmap-ch72v deletion completed in 7.091485157s
• [SLOW TEST:11.476 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:21:28.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-14c6ec1c-671c-11ea-811c-0242ac110013
STEP: Creating a pod to test consume secrets
Mar 16 00:21:29.369: INFO: Waiting up to 5m0s for pod "pod-secrets-14c80f0b-671c-11ea-811c-0242ac110013" in namespace "e2e-tests-secrets-r6mff" to be "success or failure"
Mar 16 00:21:29.374: INFO: Pod "pod-secrets-14c80f0b-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74436ms
Mar 16 00:21:31.378: INFO: Pod "pod-secrets-14c80f0b-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008401278s
Mar 16 00:21:33.448: INFO: Pod "pod-secrets-14c80f0b-671c-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.078103212s
Mar 16 00:21:35.586: INFO: Pod "pod-secrets-14c80f0b-671c-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.216071678s
STEP: Saw pod success
Mar 16 00:21:35.586: INFO: Pod "pod-secrets-14c80f0b-671c-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 16 00:21:35.772: INFO: Trying to get logs from node hunter-worker pod pod-secrets-14c80f0b-671c-11ea-811c-0242ac110013 container secret-volume-test:
STEP: delete the pod
Mar 16 00:21:35.868: INFO: Waiting for pod pod-secrets-14c80f0b-671c-11ea-811c-0242ac110013 to disappear
Mar 16 00:21:35.927: INFO: Pod pod-secrets-14c80f0b-671c-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:21:35.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-r6mff" for this suite.
Mar 16 00:21:42.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:21:42.062: INFO: namespace: e2e-tests-secrets-r6mff, resource: bindings, ignored listing per whitelist
Mar 16 00:21:42.080: INFO: namespace e2e-tests-secrets-r6mff deletion completed in 6.149332048s
• [SLOW TEST:13.332 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:21:42.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Mar 16 00:21:42.263: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-27c6f" to be "success or failure"
Mar 16 00:21:42.279: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.769123ms
Mar 16 00:21:44.285: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02197385s
Mar 16 00:21:46.289: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025924262s
Mar 16 00:21:48.293: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029277926s
STEP: Saw pod success
Mar 16 00:21:48.293: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Mar 16 00:21:48.295: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Mar 16 00:21:48.480: INFO: Waiting for pod pod-host-path-test to disappear
Mar 16 00:21:48.501: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:21:48.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-27c6f" for this suite.
Mar 16 00:21:56.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:21:56.608: INFO: namespace: e2e-tests-hostpath-27c6f, resource: bindings, ignored listing per whitelist
Mar 16 00:21:56.627: INFO: namespace e2e-tests-hostpath-27c6f deletion completed in 8.122044364s
• [SLOW TEST:14.546 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:21:56.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Mar 16 00:21:56.706: INFO: namespace e2e-tests-kubectl-cnzdk
Mar 16 00:21:56.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cnzdk'
Mar 16 00:21:57.064: INFO: stderr: ""
Mar 16 00:21:57.064: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Mar 16 00:21:58.068: INFO: Selector matched 1 pods for map[app:redis]
Mar 16 00:21:58.068: INFO: Found 0 / 1
Mar 16 00:21:59.075: INFO: Selector matched 1 pods for map[app:redis]
Mar 16 00:21:59.076: INFO: Found 0 / 1
Mar 16 00:22:00.069: INFO: Selector matched 1 pods for map[app:redis]
Mar 16 00:22:00.069: INFO: Found 0 / 1
Mar 16 00:22:01.068: INFO: Selector matched 1 pods for map[app:redis]
Mar 16 00:22:01.068: INFO: Found 1 / 1
Mar 16 00:22:01.068: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Mar 16 00:22:01.077: INFO: Selector matched 1 pods for map[app:redis]
Mar 16 00:22:01.077: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 16 00:22:01.077: INFO: wait on redis-master startup in e2e-tests-kubectl-cnzdk
Mar 16 00:22:01.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sgssl redis-master --namespace=e2e-tests-kubectl-cnzdk'
Mar 16 00:22:01.193: INFO: stderr: ""
Mar 16 00:22:01.193: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 Mar 00:21:59.461 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Mar 00:21:59.461 # Server started, Redis version 3.2.12\n1:M 16 Mar 00:21:59.461 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Mar 00:21:59.461 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Mar 16 00:22:01.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-cnzdk'
Mar 16 00:22:01.390: INFO: stderr: ""
Mar 16 00:22:01.390: INFO: stdout: "service/rm2 exposed\n"
Mar 16 00:22:01.394: INFO: Service rm2 in namespace e2e-tests-kubectl-cnzdk found.
STEP: exposing service
Mar 16 00:22:03.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-cnzdk'
Mar 16 00:22:03.812: INFO: stderr: ""
Mar 16 00:22:03.812: INFO: stdout: "service/rm3 exposed\n"
Mar 16 00:22:03.824: INFO: Service rm3 in namespace e2e-tests-kubectl-cnzdk found.
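[Editor's note: the `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` step above generates a Service from the RC's template labels. A hand-written sketch of roughly the Service it produces (the `app: redis` selector is assumed from the `map[app:redis]` selector shown earlier in this log; field values are illustrative):]

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: e2e-tests-kubectl-cnzdk
spec:
  selector:
    app: redis          # assumed from the RC pod template labels seen in the log
  ports:
  - port: 1234          # --port: the port the Service listens on
    targetPort: 6379    # --target-port: the Redis container port
```

The second `kubectl expose service rm2 --name=rm3` step would produce an analogous Service with `port: 2345` and the same selector.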
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:22:05.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cnzdk" for this suite.
Mar 16 00:22:29.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:22:29.866: INFO: namespace: e2e-tests-kubectl-cnzdk, resource: bindings, ignored listing per whitelist
Mar 16 00:22:29.921: INFO: namespace e2e-tests-kubectl-cnzdk deletion completed in 24.085565019s
• [SLOW TEST:33.294 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:22:29.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 16 00:22:30.065: INFO: Waiting up to 5m0s for pod "pod-38f529a1-671c-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-kc59g" to be "success or failure"
Mar 16 00:22:30.076: INFO: Pod "pod-38f529a1-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 10.456969ms
Mar 16 00:22:32.080: INFO: Pod "pod-38f529a1-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014841041s
Mar 16 00:22:34.191: INFO: Pod "pod-38f529a1-671c-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.125085769s
Mar 16 00:22:36.194: INFO: Pod "pod-38f529a1-671c-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128425194s
STEP: Saw pod success
Mar 16 00:22:36.194: INFO: Pod "pod-38f529a1-671c-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 16 00:22:36.196: INFO: Trying to get logs from node hunter-worker2 pod pod-38f529a1-671c-11ea-811c-0242ac110013 container test-container:
STEP: delete the pod
Mar 16 00:22:36.209: INFO: Waiting for pod pod-38f529a1-671c-11ea-811c-0242ac110013 to disappear
Mar 16 00:22:36.228: INFO: Pod pod-38f529a1-671c-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:22:36.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kc59g" for this suite.
Mar 16 00:22:44.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:22:44.299: INFO: namespace: e2e-tests-emptydir-kc59g, resource: bindings, ignored listing per whitelist
Mar 16 00:22:44.325: INFO: namespace e2e-tests-emptydir-kc59g deletion completed in 8.094576707s
• [SLOW TEST:14.404 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:22:44.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Mar 16 00:22:44.426: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix530088975/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:22:44.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2zzkl" for this suite.
Mar 16 00:22:50.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:22:50.701: INFO: namespace: e2e-tests-kubectl-2zzkl, resource: bindings, ignored listing per whitelist
Mar 16 00:22:50.739: INFO: namespace e2e-tests-kubectl-2zzkl deletion completed in 6.247680391s
• [SLOW TEST:6.413 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:22:50.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-45714360-671c-11ea-811c-0242ac110013
STEP: Creating a pod to test consume secrets
Mar 16 00:22:51.044: INFO: Waiting up to 5m0s for pod "pod-secrets-4575bc5c-671c-11ea-811c-0242ac110013" in namespace "e2e-tests-secrets-88xq9" to be "success or failure"
Mar 16 00:22:51.054: INFO: Pod "pod-secrets-4575bc5c-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 10.011709ms
Mar 16 00:22:53.058: INFO: Pod "pod-secrets-4575bc5c-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014342535s
Mar 16 00:22:55.062: INFO: Pod "pod-secrets-4575bc5c-671c-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018427396s
STEP: Saw pod success
Mar 16 00:22:55.062: INFO: Pod "pod-secrets-4575bc5c-671c-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 16 00:22:55.065: INFO: Trying to get logs from node hunter-worker pod pod-secrets-4575bc5c-671c-11ea-811c-0242ac110013 container secret-env-test:
STEP: delete the pod
Mar 16 00:22:55.096: INFO: Waiting for pod pod-secrets-4575bc5c-671c-11ea-811c-0242ac110013 to disappear
Mar 16 00:22:55.102: INFO: Pod pod-secrets-4575bc5c-671c-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:22:55.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-88xq9" for this suite.
Mar 16 00:23:01.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:23:01.170: INFO: namespace: e2e-tests-secrets-88xq9, resource: bindings, ignored listing per whitelist
Mar 16 00:23:01.209: INFO: namespace e2e-tests-secrets-88xq9 deletion completed in 6.104681503s
• [SLOW TEST:10.469 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:23:01.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:23:07.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-7b5lx" for this suite.
Mar 16 00:23:13.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:23:13.650: INFO: namespace: e2e-tests-namespaces-7b5lx, resource: bindings, ignored listing per whitelist
Mar 16 00:23:13.660: INFO: namespace e2e-tests-namespaces-7b5lx deletion completed in 6.08740469s
STEP: Destroying namespace "e2e-tests-nsdeletetest-xcdnq" for this suite.
Mar 16 00:23:13.663: INFO: Namespace e2e-tests-nsdeletetest-xcdnq was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-cz7ct" for this suite.
Mar 16 00:23:19.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:23:19.718: INFO: namespace: e2e-tests-nsdeletetest-cz7ct, resource: bindings, ignored listing per whitelist
Mar 16 00:23:19.752: INFO: namespace e2e-tests-nsdeletetest-cz7ct deletion completed in 6.089266585s
• [SLOW TEST:18.543 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:23:19.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Mar 16 00:23:19.845: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:23:19.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t58gn" for this suite.
Mar 16 00:23:25.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:23:26.020: INFO: namespace: e2e-tests-kubectl-t58gn, resource: bindings, ignored listing per whitelist
Mar 16 00:23:26.024: INFO: namespace e2e-tests-kubectl-t58gn deletion completed in 6.100435381s
• [SLOW TEST:6.271 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:23:26.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 16 00:23:26.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a7ace12-671c-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-xwgjg" to be "success or failure"
Mar 16 00:23:26.312: INFO: Pod "downwardapi-volume-5a7ace12-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.585333ms
Mar 16 00:23:28.316: INFO: Pod "downwardapi-volume-5a7ace12-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008379288s
Mar 16 00:23:30.323: INFO: Pod "downwardapi-volume-5a7ace12-671c-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015342523s
STEP: Saw pod success
Mar 16 00:23:30.323: INFO: Pod "downwardapi-volume-5a7ace12-671c-11ea-811c-0242ac110013" satisfied condition "success or failure"
Mar 16 00:23:30.326: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5a7ace12-671c-11ea-811c-0242ac110013 container client-container:
STEP: delete the pod
Mar 16 00:23:30.357: INFO: Waiting for pod downwardapi-volume-5a7ace12-671c-11ea-811c-0242ac110013 to disappear
Mar 16 00:23:30.377: INFO: Pod downwardapi-volume-5a7ace12-671c-11ea-811c-0242ac110013 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 16 00:23:30.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xwgjg" for this suite.
Mar 16 00:23:36.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 16 00:23:36.403: INFO: namespace: e2e-tests-projected-xwgjg, resource: bindings, ignored listing per whitelist
Mar 16 00:23:36.469: INFO: namespace e2e-tests-projected-xwgjg deletion completed in 6.088396027s
• [SLOW TEST:10.445 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 16 00:23:36.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 16 00:23:36.608: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:23:36.611: INFO: Number of nodes with available pods: 0 Mar 16 00:23:36.611: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:23:37.616: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:23:37.619: INFO: Number of nodes with available pods: 0 Mar 16 00:23:37.619: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:23:38.774: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:23:38.797: INFO: Number of nodes with available pods: 0 Mar 16 00:23:38.797: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:23:39.671: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:23:39.674: INFO: Number of nodes with available pods: 0 Mar 16 00:23:39.674: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:23:40.614: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:23:40.617: INFO: Number of nodes with available pods: 1 Mar 16 00:23:40.617: INFO: Node hunter-worker2 is running more than one daemon pod Mar 16 00:23:41.616: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:23:41.619: INFO: Number of nodes with available pods: 2 Mar 16 00:23:41.619: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 16 00:23:41.637: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 00:23:41.642: INFO: Number of nodes with available pods: 2 Mar 16 00:23:41.642: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-gscn9, will wait for the garbage collector to delete the pods Mar 16 00:23:42.747: INFO: Deleting DaemonSet.extensions daemon-set took: 5.814771ms Mar 16 00:23:42.847: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.221939ms Mar 16 00:23:51.751: INFO: Number of nodes with available pods: 0 Mar 16 00:23:51.751: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 00:23:51.754: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gscn9/daemonsets","resourceVersion":"61616"},"items":null} Mar 16 00:23:51.756: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gscn9/pods","resourceVersion":"61616"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:23:51.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-gscn9" for this suite. 
Mar 16 00:23:57.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:23:57.881: INFO: namespace: e2e-tests-daemonsets-gscn9, resource: bindings, ignored listing per whitelist Mar 16 00:23:57.896: INFO: namespace e2e-tests-daemonsets-gscn9 deletion completed in 6.125706111s • [SLOW TEST:21.427 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:23:57.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 16 00:24:02.016: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-6d5e7b34-671c-11ea-811c-0242ac110013,GenerateName:,Namespace:e2e-tests-events-kqxl8,SelfLink:/api/v1/namespaces/e2e-tests-events-kqxl8/pods/send-events-6d5e7b34-671c-11ea-811c-0242ac110013,UID:6d6075bd-671c-11ea-99e8-0242ac110002,ResourceVersion:61668,Generation:0,CreationTimestamp:2020-03-16 00:23:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 987517629,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-drggq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-drggq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-drggq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025313f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002531410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 00:23:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 00:24:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 00:24:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 00:23:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.68,StartTime:2020-03-16 00:23:58 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-16 00:24:00 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://6207bb30ca55559f4b8db7800bd77b39a459dc849d60c7484a8c3507ef1264f9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 16 00:24:04.022: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 16 00:24:06.026: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:24:06.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-kqxl8" for this suite. 
Mar 16 00:24:44.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:24:44.092: INFO: namespace: e2e-tests-events-kqxl8, resource: bindings, ignored listing per whitelist Mar 16 00:24:44.152: INFO: namespace e2e-tests-events-kqxl8 deletion completed in 38.099288914s • [SLOW TEST:46.256 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:24:44.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-88f4c9dc-671c-11ea-811c-0242ac110013 STEP: Creating configMap with name cm-test-opt-upd-88f4ca38-671c-11ea-811c-0242ac110013 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-88f4c9dc-671c-11ea-811c-0242ac110013 STEP: Updating configmap cm-test-opt-upd-88f4ca38-671c-11ea-811c-0242ac110013 STEP: Creating configMap with name cm-test-opt-create-88f4ca61-671c-11ea-811c-0242ac110013 STEP: waiting to observe update in volume [AfterEach] [sig-storage] 
ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:24:52.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zcrkd" for this suite. Mar 16 00:25:16.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:25:16.472: INFO: namespace: e2e-tests-configmap-zcrkd, resource: bindings, ignored listing per whitelist Mar 16 00:25:16.517: INFO: namespace e2e-tests-configmap-zcrkd deletion completed in 24.112080326s • [SLOW TEST:32.366 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:25:16.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-5hnxl STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5hnxl to 
expose endpoints map[] Mar 16 00:25:16.668: INFO: Get endpoints failed (8.444522ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 16 00:25:17.672: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5hnxl exposes endpoints map[] (1.011884042s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-5hnxl STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5hnxl to expose endpoints map[pod1:[80]] Mar 16 00:25:20.709: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5hnxl exposes endpoints map[pod1:[80]] (3.030488787s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-5hnxl STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5hnxl to expose endpoints map[pod1:[80] pod2:[80]] Mar 16 00:25:23.770: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5hnxl exposes endpoints map[pod1:[80] pod2:[80]] (3.057592527s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-5hnxl STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5hnxl to expose endpoints map[pod2:[80]] Mar 16 00:25:24.804: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5hnxl exposes endpoints map[pod2:[80]] (1.030268605s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-5hnxl STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5hnxl to expose endpoints map[] Mar 16 00:25:25.828: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5hnxl exposes endpoints map[] (1.016092724s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:25:25.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-services-5hnxl" for this suite. Mar 16 00:25:48.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:25:48.187: INFO: namespace: e2e-tests-services-5hnxl, resource: bindings, ignored listing per whitelist Mar 16 00:25:48.207: INFO: namespace e2e-tests-services-5hnxl deletion completed in 22.205814535s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:31.689 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:25:48.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 16 00:25:48.321: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 
16 00:25:52.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-l6b44" for this suite. Mar 16 00:26:30.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:26:30.446: INFO: namespace: e2e-tests-pods-l6b44, resource: bindings, ignored listing per whitelist Mar 16 00:26:30.461: INFO: namespace e2e-tests-pods-l6b44 deletion completed in 38.090847699s • [SLOW TEST:42.253 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:26:30.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 16 00:26:30.558: INFO: Waiting up to 5m0s for pod "pod-c84c2a02-671c-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-wlrk9" to be "success or failure" Mar 16 00:26:30.575: INFO: Pod "pod-c84c2a02-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.478596ms Mar 16 00:26:32.579: INFO: Pod "pod-c84c2a02-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020543654s Mar 16 00:26:34.583: INFO: Pod "pod-c84c2a02-671c-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024206381s STEP: Saw pod success Mar 16 00:26:34.583: INFO: Pod "pod-c84c2a02-671c-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:26:34.586: INFO: Trying to get logs from node hunter-worker2 pod pod-c84c2a02-671c-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 16 00:26:34.618: INFO: Waiting for pod pod-c84c2a02-671c-11ea-811c-0242ac110013 to disappear Mar 16 00:26:34.628: INFO: Pod pod-c84c2a02-671c-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:26:34.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wlrk9" for this suite. 
Mar 16 00:26:40.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:26:40.682: INFO: namespace: e2e-tests-emptydir-wlrk9, resource: bindings, ignored listing per whitelist Mar 16 00:26:40.719: INFO: namespace e2e-tests-emptydir-wlrk9 deletion completed in 6.087499801s • [SLOW TEST:10.258 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:26:40.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 16 00:26:40.844: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce6e3eac-671c-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-pwvqc" to be "success or failure" Mar 16 00:26:40.854: INFO: Pod 
"downwardapi-volume-ce6e3eac-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 9.429114ms Mar 16 00:26:42.862: INFO: Pod "downwardapi-volume-ce6e3eac-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0173804s Mar 16 00:26:44.865: INFO: Pod "downwardapi-volume-ce6e3eac-671c-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020910084s STEP: Saw pod success Mar 16 00:26:44.865: INFO: Pod "downwardapi-volume-ce6e3eac-671c-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:26:44.868: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ce6e3eac-671c-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 16 00:26:44.895: INFO: Waiting for pod downwardapi-volume-ce6e3eac-671c-11ea-811c-0242ac110013 to disappear Mar 16 00:26:44.925: INFO: Pod downwardapi-volume-ce6e3eac-671c-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:26:44.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pwvqc" for this suite. 
Mar 16 00:26:50.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:26:50.980: INFO: namespace: e2e-tests-downward-api-pwvqc, resource: bindings, ignored listing per whitelist Mar 16 00:26:51.016: INFO: namespace e2e-tests-downward-api-pwvqc deletion completed in 6.088173349s • [SLOW TEST:10.296 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:26:51.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 16 00:26:51.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4a0586a-671c-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-q68dw" to be "success or failure" Mar 16 00:26:51.315: INFO: Pod 
"downwardapi-volume-d4a0586a-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 77.26317ms Mar 16 00:26:53.319: INFO: Pod "downwardapi-volume-d4a0586a-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081466484s Mar 16 00:26:55.322: INFO: Pod "downwardapi-volume-d4a0586a-671c-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085064501s STEP: Saw pod success Mar 16 00:26:55.323: INFO: Pod "downwardapi-volume-d4a0586a-671c-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:26:55.326: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-d4a0586a-671c-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 16 00:26:55.343: INFO: Waiting for pod downwardapi-volume-d4a0586a-671c-11ea-811c-0242ac110013 to disappear Mar 16 00:26:55.348: INFO: Pod downwardapi-volume-d4a0586a-671c-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:26:55.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q68dw" for this suite. 
Mar 16 00:27:01.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:27:01.404: INFO: namespace: e2e-tests-projected-q68dw, resource: bindings, ignored listing per whitelist Mar 16 00:27:01.445: INFO: namespace e2e-tests-projected-q68dw deletion completed in 6.093751954s • [SLOW TEST:10.429 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:27:01.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 16 00:27:01.580: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dacb87f0-671c-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-pthqw" to be "success or failure" Mar 16 00:27:01.603: INFO: Pod "downwardapi-volume-dacb87f0-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", 
readiness=false. Elapsed: 23.137181ms Mar 16 00:27:03.608: INFO: Pod "downwardapi-volume-dacb87f0-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027711638s Mar 16 00:27:05.611: INFO: Pod "downwardapi-volume-dacb87f0-671c-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031346439s STEP: Saw pod success Mar 16 00:27:05.611: INFO: Pod "downwardapi-volume-dacb87f0-671c-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:27:05.614: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-dacb87f0-671c-11ea-811c-0242ac110013 container client-container: STEP: delete the pod Mar 16 00:27:05.630: INFO: Waiting for pod downwardapi-volume-dacb87f0-671c-11ea-811c-0242ac110013 to disappear Mar 16 00:27:05.635: INFO: Pod downwardapi-volume-dacb87f0-671c-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:27:05.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pthqw" for this suite. 
Mar 16 00:27:11.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:27:11.675: INFO: namespace: e2e-tests-downward-api-pthqw, resource: bindings, ignored listing per whitelist Mar 16 00:27:11.744: INFO: namespace e2e-tests-downward-api-pthqw deletion completed in 6.106073544s • [SLOW TEST:10.299 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:27:11.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 16 00:27:11.862: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gw5vc,SelfLink:/api/v1/namespaces/e2e-tests-watch-gw5vc/configmaps/e2e-watch-test-label-changed,UID:e0e82517-671c-11ea-99e8-0242ac110002,ResourceVersion:62272,Generation:0,CreationTimestamp:2020-03-16 00:27:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 16 00:27:11.862: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gw5vc,SelfLink:/api/v1/namespaces/e2e-tests-watch-gw5vc/configmaps/e2e-watch-test-label-changed,UID:e0e82517-671c-11ea-99e8-0242ac110002,ResourceVersion:62273,Generation:0,CreationTimestamp:2020-03-16 00:27:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 16 00:27:11.862: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gw5vc,SelfLink:/api/v1/namespaces/e2e-tests-watch-gw5vc/configmaps/e2e-watch-test-label-changed,UID:e0e82517-671c-11ea-99e8-0242ac110002,ResourceVersion:62274,Generation:0,CreationTimestamp:2020-03-16 00:27:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 16 00:27:21.934: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gw5vc,SelfLink:/api/v1/namespaces/e2e-tests-watch-gw5vc/configmaps/e2e-watch-test-label-changed,UID:e0e82517-671c-11ea-99e8-0242ac110002,ResourceVersion:62294,Generation:0,CreationTimestamp:2020-03-16 00:27:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 16 00:27:21.934: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gw5vc,SelfLink:/api/v1/namespaces/e2e-tests-watch-gw5vc/configmaps/e2e-watch-test-label-changed,UID:e0e82517-671c-11ea-99e8-0242ac110002,ResourceVersion:62295,Generation:0,CreationTimestamp:2020-03-16 00:27:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 16 00:27:21.934: INFO: 
Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-gw5vc,SelfLink:/api/v1/namespaces/e2e-tests-watch-gw5vc/configmaps/e2e-watch-test-label-changed,UID:e0e82517-671c-11ea-99e8-0242ac110002,ResourceVersion:62296,Generation:0,CreationTimestamp:2020-03-16 00:27:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:27:21.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-gw5vc" for this suite. Mar 16 00:27:27.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:27:27.983: INFO: namespace: e2e-tests-watch-gw5vc, resource: bindings, ignored listing per whitelist Mar 16 00:27:28.021: INFO: namespace e2e-tests-watch-gw5vc deletion completed in 6.081843497s • [SLOW TEST:16.276 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:27:28.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 16 00:27:32.409: INFO: Waiting up to 5m0s for pod "client-envvars-ed29be47-671c-11ea-811c-0242ac110013" in namespace "e2e-tests-pods-clvsv" to be "success or failure" Mar 16 00:27:32.459: INFO: Pod "client-envvars-ed29be47-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 49.617545ms Mar 16 00:27:34.572: INFO: Pod "client-envvars-ed29be47-671c-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163381051s Mar 16 00:27:36.576: INFO: Pod "client-envvars-ed29be47-671c-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.166834997s Mar 16 00:27:38.580: INFO: Pod "client-envvars-ed29be47-671c-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.171028037s STEP: Saw pod success Mar 16 00:27:38.580: INFO: Pod "client-envvars-ed29be47-671c-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:27:38.583: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-ed29be47-671c-11ea-811c-0242ac110013 container env3cont: STEP: delete the pod Mar 16 00:27:38.687: INFO: Waiting for pod client-envvars-ed29be47-671c-11ea-811c-0242ac110013 to disappear Mar 16 00:27:38.690: INFO: Pod client-envvars-ed29be47-671c-11ea-811c-0242ac110013 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:27:38.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-clvsv" for this suite. Mar 16 00:28:26.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:28:26.784: INFO: namespace: e2e-tests-pods-clvsv, resource: bindings, ignored listing per whitelist Mar 16 00:28:26.792: INFO: namespace e2e-tests-pods-clvsv deletion completed in 48.097549711s • [SLOW TEST:58.771 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:28:26.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Mar 16 00:28:30.910: INFO: Pod pod-hostip-0da5086f-671d-11ea-811c-0242ac110013 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:28:30.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8z8c7" for this suite. Mar 16 00:28:52.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:28:52.951: INFO: namespace: e2e-tests-pods-8z8c7, resource: bindings, ignored listing per whitelist Mar 16 00:28:53.004: INFO: namespace e2e-tests-pods-8z8c7 deletion completed in 22.090533427s • [SLOW TEST:26.211 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:28:53.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with 
mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-1d4d738d-671d-11ea-811c-0242ac110013 STEP: Creating a pod to test consume configMaps Mar 16 00:28:53.191: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1d50e2f6-671d-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-4qsv7" to be "success or failure" Mar 16 00:28:53.281: INFO: Pod "pod-projected-configmaps-1d50e2f6-671d-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 90.456745ms Mar 16 00:28:55.376: INFO: Pod "pod-projected-configmaps-1d50e2f6-671d-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185113203s Mar 16 00:28:57.380: INFO: Pod "pod-projected-configmaps-1d50e2f6-671d-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189392016s STEP: Saw pod success Mar 16 00:28:57.380: INFO: Pod "pod-projected-configmaps-1d50e2f6-671d-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:28:57.384: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-1d50e2f6-671d-11ea-811c-0242ac110013 container projected-configmap-volume-test: STEP: delete the pod Mar 16 00:28:57.436: INFO: Waiting for pod pod-projected-configmaps-1d50e2f6-671d-11ea-811c-0242ac110013 to disappear Mar 16 00:28:57.440: INFO: Pod pod-projected-configmaps-1d50e2f6-671d-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:28:57.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4qsv7" for this suite. 
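The "consumable from pods in volume with mappings" behavior exercised above corresponds to a projected ConfigMap volume with a key-to-path mapping. A minimal sketch (all names, keys, and paths here are illustrative, not the generated ones from the run):

```yaml
# Sketch: projected ConfigMap volume with an item mapping, similar to
# what this e2e test creates. Names and keys are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-2            # only this key is projected
            path: path/to/data-2   # at the chosen relative path
```

The test then reads the mapped file from the container and asserts its contents, which is why the pod runs to "Succeeded" and its logs are fetched.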
Mar 16 00:29:03.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:29:03.533: INFO: namespace: e2e-tests-projected-4qsv7, resource: bindings, ignored listing per whitelist Mar 16 00:29:03.559: INFO: namespace e2e-tests-projected-4qsv7 deletion completed in 6.116881851s • [SLOW TEST:10.556 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:29:03.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-238d5f42-671d-11ea-811c-0242ac110013 STEP: Creating a pod to test consume secrets Mar 16 00:29:03.664: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-238fedbf-671d-11ea-811c-0242ac110013" in namespace "e2e-tests-projected-xgs98" to be "success or failure" Mar 16 00:29:03.668: INFO: Pod 
"pod-projected-secrets-238fedbf-671d-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363053ms Mar 16 00:29:05.698: INFO: Pod "pod-projected-secrets-238fedbf-671d-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034387685s Mar 16 00:29:07.703: INFO: Pod "pod-projected-secrets-238fedbf-671d-11ea-811c-0242ac110013": Phase="Running", Reason="", readiness=true. Elapsed: 4.03855102s Mar 16 00:29:09.707: INFO: Pod "pod-projected-secrets-238fedbf-671d-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043121563s STEP: Saw pod success Mar 16 00:29:09.707: INFO: Pod "pod-projected-secrets-238fedbf-671d-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:29:09.710: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-238fedbf-671d-11ea-811c-0242ac110013 container projected-secret-volume-test: STEP: delete the pod Mar 16 00:29:09.726: INFO: Waiting for pod pod-projected-secrets-238fedbf-671d-11ea-811c-0242ac110013 to disappear Mar 16 00:29:09.730: INFO: Pod pod-projected-secrets-238fedbf-671d-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:29:09.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xgs98" for this suite. 
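The non-root/defaultMode/fsGroup variant above boils down to a projected Secret volume combined with a pod-level security context. A hedged sketch (UIDs, GIDs, and names are illustrative):

```yaml
# Sketch: projected Secret volume consumed as non-root with defaultMode
# and fsGroup set. All identifiers are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 1001          # volume files are group-owned by this GID
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/projected-secret-volume"]
    securityContext:
      runAsUser: 1000      # non-root user
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400    # permission bits applied to projected files
      sources:
      - secret:
          name: projected-secret-test
```

The assertion the test makes is on the ownership and mode of the projected files as seen from inside the container.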
Mar 16 00:29:15.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:29:15.794: INFO: namespace: e2e-tests-projected-xgs98, resource: bindings, ignored listing per whitelist Mar 16 00:29:15.836: INFO: namespace e2e-tests-projected-xgs98 deletion completed in 6.103579172s • [SLOW TEST:12.276 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:29:15.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 16 00:29:23.995: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 16 00:29:24.013: INFO: Pod pod-with-prestop-http-hook still exists Mar 16 00:29:26.013: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 16 00:29:26.016: INFO: Pod pod-with-prestop-http-hook still exists Mar 16 00:29:28.013: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 16 00:29:28.018: INFO: Pod pod-with-prestop-http-hook still exists Mar 16 00:29:30.013: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 16 00:29:30.017: INFO: Pod pod-with-prestop-http-hook still exists Mar 16 00:29:32.013: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 16 00:29:32.017: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:29:32.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vd2nm" for this suite. 
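The prestop sequence logged above (delete the pod, poll until it disappears, then check that the handler received the hook) relies on a `preStop` `httpGet` lifecycle hook. A sketch under stated assumptions: the handler pod's IP and the echo path are illustrative stand-ins for what the test wires up.

```yaml
# Sketch: pod with a preStop httpGet hook. The hook fires when the pod
# is deleted, before the container is stopped. Host/path are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          port: 8080
          host: 10.244.1.5   # hypothetical IP of the hook-handler pod
```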
Mar 16 00:29:56.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:29:56.100: INFO: namespace: e2e-tests-container-lifecycle-hook-vd2nm, resource: bindings, ignored listing per whitelist Mar 16 00:29:56.139: INFO: namespace e2e-tests-container-lifecycle-hook-vd2nm deletion completed in 24.1113524s • [SLOW TEST:40.303 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:29:56.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Mar 16 00:29:56.247: INFO: Waiting up to 5m0s for pod "client-containers-42e56dad-671d-11ea-811c-0242ac110013" in namespace "e2e-tests-containers-rldbj" to be "success or failure" Mar 16 00:29:56.252: INFO: Pod "client-containers-42e56dad-671d-11ea-811c-0242ac110013": 
Phase="Pending", Reason="", readiness=false. Elapsed: 5.376421ms Mar 16 00:29:58.341: INFO: Pod "client-containers-42e56dad-671d-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094167199s Mar 16 00:30:00.347: INFO: Pod "client-containers-42e56dad-671d-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099880739s STEP: Saw pod success Mar 16 00:30:00.347: INFO: Pod "client-containers-42e56dad-671d-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:30:00.349: INFO: Trying to get logs from node hunter-worker pod client-containers-42e56dad-671d-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 16 00:30:00.374: INFO: Waiting for pod client-containers-42e56dad-671d-11ea-811c-0242ac110013 to disappear Mar 16 00:30:00.390: INFO: Pod client-containers-42e56dad-671d-11ea-811c-0242ac110013 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:30:00.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-rldbj" for this suite. 
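"Use the image defaults if command and args are blank" means the pod spec simply omits `command` and `args`, so the image's `ENTRYPOINT`/`CMD` run. A minimal sketch (image and names are illustrative):

```yaml
# Sketch: no command/args in the container spec, so the image's own
# ENTRYPOINT and CMD defaults apply. Image name is hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # command and args intentionally omitted
```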
Mar 16 00:30:06.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:30:06.619: INFO: namespace: e2e-tests-containers-rldbj, resource: bindings, ignored listing per whitelist Mar 16 00:30:06.621: INFO: namespace e2e-tests-containers-rldbj deletion completed in 6.227693593s • [SLOW TEST:10.482 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:30:06.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0316 00:30:16.755668 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 16 00:30:16.755: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:30:16.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-vrft8" for this suite. 
Mar 16 00:30:22.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:30:22.813: INFO: namespace: e2e-tests-gc-vrft8, resource: bindings, ignored listing per whitelist Mar 16 00:30:22.867: INFO: namespace e2e-tests-gc-vrft8 deletion completed in 6.108292347s • [SLOW TEST:16.246 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:30:22.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 16 00:30:23.025: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 16 00:30:23.031: INFO: Number of nodes with available pods: 0 Mar 16 00:30:23.031: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 16 00:30:23.162: INFO: Number of nodes with available pods: 0 Mar 16 00:30:23.162: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:24.166: INFO: Number of nodes with available pods: 0 Mar 16 00:30:24.166: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:25.166: INFO: Number of nodes with available pods: 0 Mar 16 00:30:25.167: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:26.167: INFO: Number of nodes with available pods: 0 Mar 16 00:30:26.167: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:27.167: INFO: Number of nodes with available pods: 1 Mar 16 00:30:27.167: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 16 00:30:27.198: INFO: Number of nodes with available pods: 1 Mar 16 00:30:27.198: INFO: Number of running nodes: 0, number of available pods: 1 Mar 16 00:30:28.201: INFO: Number of nodes with available pods: 0 Mar 16 00:30:28.201: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 16 00:30:28.215: INFO: Number of nodes with available pods: 0 Mar 16 00:30:28.215: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:29.224: INFO: Number of nodes with available pods: 0 Mar 16 00:30:29.224: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:30.218: INFO: Number of nodes with available pods: 0 Mar 16 00:30:30.218: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:31.218: INFO: Number of nodes with available pods: 0 Mar 16 00:30:31.218: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:32.219: INFO: Number of nodes with available pods: 0 Mar 16 00:30:32.219: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:33.219: INFO: Number of nodes with 
available pods: 0 Mar 16 00:30:33.219: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:34.220: INFO: Number of nodes with available pods: 0 Mar 16 00:30:34.220: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:35.306: INFO: Number of nodes with available pods: 0 Mar 16 00:30:35.306: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:36.219: INFO: Number of nodes with available pods: 0 Mar 16 00:30:36.219: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:37.218: INFO: Number of nodes with available pods: 0 Mar 16 00:30:37.218: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:38.219: INFO: Number of nodes with available pods: 0 Mar 16 00:30:38.219: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:39.219: INFO: Number of nodes with available pods: 0 Mar 16 00:30:39.219: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:40.220: INFO: Number of nodes with available pods: 0 Mar 16 00:30:40.220: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:41.219: INFO: Number of nodes with available pods: 0 Mar 16 00:30:41.219: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:42.219: INFO: Number of nodes with available pods: 0 Mar 16 00:30:42.219: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:43.219: INFO: Number of nodes with available pods: 0 Mar 16 00:30:43.219: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:44.218: INFO: Number of nodes with available pods: 0 Mar 16 00:30:44.218: INFO: Node hunter-worker is running more than one daemon pod Mar 16 00:30:45.219: INFO: Number of nodes with available pods: 1 Mar 16 00:30:45.219: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-t2wvm, will wait for the garbage collector to delete the pods Mar 16 00:30:45.331: INFO: Deleting DaemonSet.extensions daemon-set took: 54.052676ms Mar 16 00:30:45.431: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.327863ms Mar 16 00:30:51.864: INFO: Number of nodes with available pods: 0 Mar 16 00:30:51.864: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 00:30:51.867: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-t2wvm/daemonsets","resourceVersion":"62994"},"items":null} Mar 16 00:30:51.871: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-t2wvm/pods","resourceVersion":"62994"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:30:51.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-t2wvm" for this suite. 
Mar 16 00:30:58.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:30:58.040: INFO: namespace: e2e-tests-daemonsets-t2wvm, resource: bindings, ignored listing per whitelist Mar 16 00:30:58.101: INFO: namespace e2e-tests-daemonsets-t2wvm deletion completed in 6.119740887s • [SLOW TEST:35.233 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:30:58.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:31:05.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-k6fvx" for this suite. 
Mar 16 00:31:27.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:31:27.355: INFO: namespace: e2e-tests-replication-controller-k6fvx, resource: bindings, ignored listing per whitelist Mar 16 00:31:27.357: INFO: namespace e2e-tests-replication-controller-k6fvx deletion completed in 22.102587577s • [SLOW TEST:29.257 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:31:27.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 16 00:31:27.454: INFO: Waiting up to 5m0s for pod "pod-7944a24c-671d-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-n754v" to be "success or failure" Mar 16 00:31:27.470: INFO: Pod "pod-7944a24c-671d-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 16.320699ms Mar 16 00:31:29.474: INFO: Pod "pod-7944a24c-671d-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019879928s Mar 16 00:31:31.478: INFO: Pod "pod-7944a24c-671d-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023980409s STEP: Saw pod success Mar 16 00:31:31.478: INFO: Pod "pod-7944a24c-671d-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:31:31.481: INFO: Trying to get logs from node hunter-worker pod pod-7944a24c-671d-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 16 00:31:31.501: INFO: Waiting for pod pod-7944a24c-671d-11ea-811c-0242ac110013 to disappear Mar 16 00:31:31.506: INFO: Pod pod-7944a24c-671d-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:31:31.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-n754v" for this suite. Mar 16 00:31:37.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:31:37.564: INFO: namespace: e2e-tests-emptydir-n754v, resource: bindings, ignored listing per whitelist Mar 16 00:31:37.633: INFO: namespace e2e-tests-emptydir-n754v deletion completed in 6.124044693s • [SLOW TEST:10.275 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:31:37.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-8qv2n [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Mar 16 00:31:37.744: INFO: Found 0 stateful pods, waiting for 3 Mar 16 00:31:47.749: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 00:31:47.749: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 00:31:47.749: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 16 00:31:57.748: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 00:31:57.748: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 00:31:57.748: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 16 00:31:57.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8qv2n ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 00:31:58.448: INFO: stderr: "I0316 00:31:57.879839 3737 log.go:172] (0xc000138790) (0xc0005f34a0) Create stream\nI0316 
00:31:57.879896 3737 log.go:172] (0xc000138790) (0xc0005f34a0) Stream added, broadcasting: 1\nI0316 00:31:57.882730 3737 log.go:172] (0xc000138790) Reply frame received for 1\nI0316 00:31:57.882780 3737 log.go:172] (0xc000138790) (0xc000818000) Create stream\nI0316 00:31:57.882801 3737 log.go:172] (0xc000138790) (0xc000818000) Stream added, broadcasting: 3\nI0316 00:31:57.883915 3737 log.go:172] (0xc000138790) Reply frame received for 3\nI0316 00:31:57.883994 3737 log.go:172] (0xc000138790) (0xc000026000) Create stream\nI0316 00:31:57.884022 3737 log.go:172] (0xc000138790) (0xc000026000) Stream added, broadcasting: 5\nI0316 00:31:57.884889 3737 log.go:172] (0xc000138790) Reply frame received for 5\nI0316 00:31:58.442610 3737 log.go:172] (0xc000138790) Data frame received for 3\nI0316 00:31:58.442644 3737 log.go:172] (0xc000818000) (3) Data frame handling\nI0316 00:31:58.442665 3737 log.go:172] (0xc000818000) (3) Data frame sent\nI0316 00:31:58.442673 3737 log.go:172] (0xc000138790) Data frame received for 3\nI0316 00:31:58.442680 3737 log.go:172] (0xc000818000) (3) Data frame handling\nI0316 00:31:58.442787 3737 log.go:172] (0xc000138790) Data frame received for 5\nI0316 00:31:58.442810 3737 log.go:172] (0xc000026000) (5) Data frame handling\nI0316 00:31:58.444776 3737 log.go:172] (0xc000138790) Data frame received for 1\nI0316 00:31:58.444799 3737 log.go:172] (0xc0005f34a0) (1) Data frame handling\nI0316 00:31:58.444819 3737 log.go:172] (0xc0005f34a0) (1) Data frame sent\nI0316 00:31:58.444841 3737 log.go:172] (0xc000138790) (0xc0005f34a0) Stream removed, broadcasting: 1\nI0316 00:31:58.444863 3737 log.go:172] (0xc000138790) Go away received\nI0316 00:31:58.445305 3737 log.go:172] (0xc000138790) (0xc0005f34a0) Stream removed, broadcasting: 1\nI0316 00:31:58.445328 3737 log.go:172] (0xc000138790) (0xc000818000) Stream removed, broadcasting: 3\nI0316 00:31:58.445340 3737 log.go:172] (0xc000138790) (0xc000026000) Stream removed, broadcasting: 5\n" Mar 16 
00:31:58.449: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 00:31:58.449: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 16 00:32:08.481: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 16 00:32:18.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8qv2n ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:32:18.860: INFO: stderr: "I0316 00:32:18.776649 3759 log.go:172] (0xc00015c840) (0xc00072e640) Create stream\nI0316 00:32:18.776689 3759 log.go:172] (0xc00015c840) (0xc00072e640) Stream added, broadcasting: 1\nI0316 00:32:18.778556 3759 log.go:172] (0xc00015c840) Reply frame received for 1\nI0316 00:32:18.778593 3759 log.go:172] (0xc00015c840) (0xc0005febe0) Create stream\nI0316 00:32:18.778604 3759 log.go:172] (0xc00015c840) (0xc0005febe0) Stream added, broadcasting: 3\nI0316 00:32:18.779278 3759 log.go:172] (0xc00015c840) Reply frame received for 3\nI0316 00:32:18.779309 3759 log.go:172] (0xc00015c840) (0xc0005c6000) Create stream\nI0316 00:32:18.779321 3759 log.go:172] (0xc00015c840) (0xc0005c6000) Stream added, broadcasting: 5\nI0316 00:32:18.780032 3759 log.go:172] (0xc00015c840) Reply frame received for 5\nI0316 00:32:18.857054 3759 log.go:172] (0xc00015c840) Data frame received for 5\nI0316 00:32:18.857081 3759 log.go:172] (0xc0005c6000) (5) Data frame handling\nI0316 00:32:18.857235 3759 log.go:172] (0xc00015c840) Data frame received for 3\nI0316 00:32:18.857246 3759 log.go:172] (0xc0005febe0) (3) Data frame handling\nI0316 00:32:18.857268 3759 log.go:172] (0xc0005febe0) (3) Data frame sent\nI0316 00:32:18.857278 3759 log.go:172] (0xc00015c840) Data 
frame received for 3\nI0316 00:32:18.857285 3759 log.go:172] (0xc0005febe0) (3) Data frame handling\nI0316 00:32:18.858581 3759 log.go:172] (0xc00015c840) Data frame received for 1\nI0316 00:32:18.858598 3759 log.go:172] (0xc00072e640) (1) Data frame handling\nI0316 00:32:18.858605 3759 log.go:172] (0xc00072e640) (1) Data frame sent\nI0316 00:32:18.858618 3759 log.go:172] (0xc00015c840) (0xc00072e640) Stream removed, broadcasting: 1\nI0316 00:32:18.858631 3759 log.go:172] (0xc00015c840) Go away received\nI0316 00:32:18.858790 3759 log.go:172] (0xc00015c840) (0xc00072e640) Stream removed, broadcasting: 1\nI0316 00:32:18.858804 3759 log.go:172] (0xc00015c840) (0xc0005febe0) Stream removed, broadcasting: 3\nI0316 00:32:18.858813 3759 log.go:172] (0xc00015c840) (0xc0005c6000) Stream removed, broadcasting: 5\n" Mar 16 00:32:18.860: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 16 00:32:18.860: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 16 00:32:28.876: INFO: Waiting for StatefulSet e2e-tests-statefulset-8qv2n/ss2 to complete update Mar 16 00:32:28.876: INFO: Waiting for Pod e2e-tests-statefulset-8qv2n/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 16 00:32:28.876: INFO: Waiting for Pod e2e-tests-statefulset-8qv2n/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 16 00:32:28.876: INFO: Waiting for Pod e2e-tests-statefulset-8qv2n/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 16 00:32:38.883: INFO: Waiting for StatefulSet e2e-tests-statefulset-8qv2n/ss2 to complete update Mar 16 00:32:38.883: INFO: Waiting for Pod e2e-tests-statefulset-8qv2n/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 16 00:32:38.883: INFO: Waiting for Pod e2e-tests-statefulset-8qv2n/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 16 00:32:48.885: INFO: 
Waiting for StatefulSet e2e-tests-statefulset-8qv2n/ss2 to complete update Mar 16 00:32:48.885: INFO: Waiting for Pod e2e-tests-statefulset-8qv2n/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Mar 16 00:32:58.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8qv2n ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 16 00:32:59.115: INFO: stderr: "I0316 00:32:59.000477 3781 log.go:172] (0xc0008aa370) (0xc000600640) Create stream\nI0316 00:32:59.000541 3781 log.go:172] (0xc0008aa370) (0xc000600640) Stream added, broadcasting: 1\nI0316 00:32:59.003032 3781 log.go:172] (0xc0008aa370) Reply frame received for 1\nI0316 00:32:59.003089 3781 log.go:172] (0xc0008aa370) (0xc0007f2000) Create stream\nI0316 00:32:59.003106 3781 log.go:172] (0xc0008aa370) (0xc0007f2000) Stream added, broadcasting: 3\nI0316 00:32:59.004022 3781 log.go:172] (0xc0008aa370) Reply frame received for 3\nI0316 00:32:59.004073 3781 log.go:172] (0xc0008aa370) (0xc00080e000) Create stream\nI0316 00:32:59.004097 3781 log.go:172] (0xc0008aa370) (0xc00080e000) Stream added, broadcasting: 5\nI0316 00:32:59.005058 3781 log.go:172] (0xc0008aa370) Reply frame received for 5\nI0316 00:32:59.109715 3781 log.go:172] (0xc0008aa370) Data frame received for 5\nI0316 00:32:59.109737 3781 log.go:172] (0xc00080e000) (5) Data frame handling\nI0316 00:32:59.109777 3781 log.go:172] (0xc0008aa370) Data frame received for 3\nI0316 00:32:59.109806 3781 log.go:172] (0xc0007f2000) (3) Data frame handling\nI0316 00:32:59.109819 3781 log.go:172] (0xc0007f2000) (3) Data frame sent\nI0316 00:32:59.110181 3781 log.go:172] (0xc0008aa370) Data frame received for 3\nI0316 00:32:59.110196 3781 log.go:172] (0xc0007f2000) (3) Data frame handling\nI0316 00:32:59.112182 3781 log.go:172] (0xc0008aa370) Data frame received for 1\nI0316 00:32:59.112233 3781 log.go:172] (0xc000600640) (1) Data 
frame handling\nI0316 00:32:59.112258 3781 log.go:172] (0xc000600640) (1) Data frame sent\nI0316 00:32:59.112279 3781 log.go:172] (0xc0008aa370) (0xc000600640) Stream removed, broadcasting: 1\nI0316 00:32:59.112307 3781 log.go:172] (0xc0008aa370) Go away received\nI0316 00:32:59.112531 3781 log.go:172] (0xc0008aa370) (0xc000600640) Stream removed, broadcasting: 1\nI0316 00:32:59.112546 3781 log.go:172] (0xc0008aa370) (0xc0007f2000) Stream removed, broadcasting: 3\nI0316 00:32:59.112562 3781 log.go:172] (0xc0008aa370) (0xc00080e000) Stream removed, broadcasting: 5\n" Mar 16 00:32:59.115: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 16 00:32:59.115: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 16 00:33:09.147: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 16 00:33:19.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8qv2n ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 16 00:33:19.403: INFO: stderr: "I0316 00:33:19.298077 3802 log.go:172] (0xc0008842c0) (0xc00062b360) Create stream\nI0316 00:33:19.298144 3802 log.go:172] (0xc0008842c0) (0xc00062b360) Stream added, broadcasting: 1\nI0316 00:33:19.300585 3802 log.go:172] (0xc0008842c0) Reply frame received for 1\nI0316 00:33:19.300639 3802 log.go:172] (0xc0008842c0) (0xc000488000) Create stream\nI0316 00:33:19.300655 3802 log.go:172] (0xc0008842c0) (0xc000488000) Stream added, broadcasting: 3\nI0316 00:33:19.301874 3802 log.go:172] (0xc0008842c0) Reply frame received for 3\nI0316 00:33:19.301923 3802 log.go:172] (0xc0008842c0) (0xc00062b400) Create stream\nI0316 00:33:19.301948 3802 log.go:172] (0xc0008842c0) (0xc00062b400) Stream added, broadcasting: 5\nI0316 00:33:19.302966 3802 log.go:172] (0xc0008842c0) Reply frame received for 5\nI0316 00:33:19.397761 3802 
log.go:172] (0xc0008842c0) Data frame received for 5\nI0316 00:33:19.397812 3802 log.go:172] (0xc00062b400) (5) Data frame handling\nI0316 00:33:19.397847 3802 log.go:172] (0xc0008842c0) Data frame received for 3\nI0316 00:33:19.397860 3802 log.go:172] (0xc000488000) (3) Data frame handling\nI0316 00:33:19.397880 3802 log.go:172] (0xc000488000) (3) Data frame sent\nI0316 00:33:19.397914 3802 log.go:172] (0xc0008842c0) Data frame received for 3\nI0316 00:33:19.397924 3802 log.go:172] (0xc000488000) (3) Data frame handling\nI0316 00:33:19.399167 3802 log.go:172] (0xc0008842c0) Data frame received for 1\nI0316 00:33:19.399196 3802 log.go:172] (0xc00062b360) (1) Data frame handling\nI0316 00:33:19.399212 3802 log.go:172] (0xc00062b360) (1) Data frame sent\nI0316 00:33:19.399236 3802 log.go:172] (0xc0008842c0) (0xc00062b360) Stream removed, broadcasting: 1\nI0316 00:33:19.399254 3802 log.go:172] (0xc0008842c0) Go away received\nI0316 00:33:19.399501 3802 log.go:172] (0xc0008842c0) (0xc00062b360) Stream removed, broadcasting: 1\nI0316 00:33:19.399525 3802 log.go:172] (0xc0008842c0) (0xc000488000) Stream removed, broadcasting: 3\nI0316 00:33:19.399542 3802 log.go:172] (0xc0008842c0) (0xc00062b400) Stream removed, broadcasting: 5\n" Mar 16 00:33:19.403: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 16 00:33:19.403: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 16 00:33:29.424: INFO: Waiting for StatefulSet e2e-tests-statefulset-8qv2n/ss2 to complete update Mar 16 00:33:29.424: INFO: Waiting for Pod e2e-tests-statefulset-8qv2n/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 16 00:33:29.424: INFO: Waiting for Pod e2e-tests-statefulset-8qv2n/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 16 00:33:29.424: INFO: Waiting for Pod e2e-tests-statefulset-8qv2n/ss2-2 to have revision ss2-7c9b54fd4c update 
revision ss2-6c5cd755cd Mar 16 00:33:39.430: INFO: Waiting for StatefulSet e2e-tests-statefulset-8qv2n/ss2 to complete update Mar 16 00:33:39.430: INFO: Waiting for Pod e2e-tests-statefulset-8qv2n/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Mar 16 00:33:49.431: INFO: Waiting for StatefulSet e2e-tests-statefulset-8qv2n/ss2 to complete update Mar 16 00:33:49.431: INFO: Waiting for Pod e2e-tests-statefulset-8qv2n/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 16 00:33:59.432: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8qv2n Mar 16 00:33:59.434: INFO: Scaling statefulset ss2 to 0 Mar 16 00:34:29.447: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 00:34:29.451: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:34:29.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-8qv2n" for this suite. 
Mar 16 00:34:35.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:34:35.518: INFO: namespace: e2e-tests-statefulset-8qv2n, resource: bindings, ignored listing per whitelist Mar 16 00:34:35.564: INFO: namespace e2e-tests-statefulset-8qv2n deletion completed in 6.090962014s • [SLOW TEST:177.931 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:34:35.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 16 00:34:43.720: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 00:34:43.725: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 00:34:45.725: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 00:34:45.730: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 00:34:47.725: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 00:34:47.729: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 00:34:49.725: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 00:34:49.729: INFO: Pod pod-with-poststart-http-hook still exists Mar 16 00:34:51.725: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 16 00:34:51.729: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:34:51.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-drbj2" for this suite. 
Mar 16 00:35:13.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:35:13.780: INFO: namespace: e2e-tests-container-lifecycle-hook-drbj2, resource: bindings, ignored listing per whitelist Mar 16 00:35:13.845: INFO: namespace e2e-tests-container-lifecycle-hook-drbj2 deletion completed in 22.111536129s • [SLOW TEST:38.281 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:35:13.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 16 00:35:13.953: INFO: Waiting up to 5m0s for pod "downward-api-0043efbe-671e-11ea-811c-0242ac110013" in namespace "e2e-tests-downward-api-n98xh" to be "success or failure" Mar 16 00:35:14.004: INFO: Pod "downward-api-0043efbe-671e-11ea-811c-0242ac110013": 
Phase="Pending", Reason="", readiness=false. Elapsed: 50.770529ms Mar 16 00:35:16.088: INFO: Pod "downward-api-0043efbe-671e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134229854s Mar 16 00:35:18.094: INFO: Pod "downward-api-0043efbe-671e-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.140437282s STEP: Saw pod success Mar 16 00:35:18.094: INFO: Pod "downward-api-0043efbe-671e-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:35:18.096: INFO: Trying to get logs from node hunter-worker pod downward-api-0043efbe-671e-11ea-811c-0242ac110013 container dapi-container: STEP: delete the pod Mar 16 00:35:18.116: INFO: Waiting for pod downward-api-0043efbe-671e-11ea-811c-0242ac110013 to disappear Mar 16 00:35:18.120: INFO: Pod downward-api-0043efbe-671e-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:35:18.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-n98xh" for this suite. 
Mar 16 00:35:24.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:35:24.260: INFO: namespace: e2e-tests-downward-api-n98xh, resource: bindings, ignored listing per whitelist Mar 16 00:35:24.290: INFO: namespace e2e-tests-downward-api-n98xh deletion completed in 6.163353434s • [SLOW TEST:10.445 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 16 00:35:24.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 16 00:35:24.404: INFO: Waiting up to 5m0s for pod "pod-067fc2ce-671e-11ea-811c-0242ac110013" in namespace "e2e-tests-emptydir-mc5n7" to be "success or failure" Mar 16 00:35:24.408: INFO: Pod "pod-067fc2ce-671e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. Elapsed: 3.971965ms Mar 16 00:35:26.412: INFO: Pod "pod-067fc2ce-671e-11ea-811c-0242ac110013": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007454058s Mar 16 00:35:28.416: INFO: Pod "pod-067fc2ce-671e-11ea-811c-0242ac110013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012051562s STEP: Saw pod success Mar 16 00:35:28.416: INFO: Pod "pod-067fc2ce-671e-11ea-811c-0242ac110013" satisfied condition "success or failure" Mar 16 00:35:28.420: INFO: Trying to get logs from node hunter-worker2 pod pod-067fc2ce-671e-11ea-811c-0242ac110013 container test-container: STEP: delete the pod Mar 16 00:35:28.452: INFO: Waiting for pod pod-067fc2ce-671e-11ea-811c-0242ac110013 to disappear Mar 16 00:35:28.462: INFO: Pod pod-067fc2ce-671e-11ea-811c-0242ac110013 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 16 00:35:28.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mc5n7" for this suite. Mar 16 00:35:34.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 16 00:35:34.553: INFO: namespace: e2e-tests-emptydir-mc5n7, resource: bindings, ignored listing per whitelist Mar 16 00:35:34.555: INFO: namespace e2e-tests-emptydir-mc5n7 deletion completed in 6.089519956s • [SLOW TEST:10.264 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSMar 16 00:35:34.555: INFO: Running AfterSuite actions on all nodes Mar 16 00:35:34.555: INFO: Running AfterSuite actions on node 1 Mar 16 00:35:34.555: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 6717.007 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS
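The StatefulSet rolling-update test above pauses and resumes the rollout by running `mv -v /usr/share/nginx/html/index.html /tmp/ || true` (and the reverse) inside pod `ss2-1` via `kubectl exec`, so the pod's readiness probe fails while the file is stashed and recovers when it is restored. A minimal local sketch of that file-shuffle idiom, using a temporary directory in place of the pod's filesystem (the directory layout here is illustrative, not part of the test):

```shell
# Simulate the readiness-file shuffle the e2e test performs via kubectl exec.
# In the real test these paths live inside the ss2-1 pod; here we use mktemp.
tmp=$(mktemp -d)
mkdir -p "$tmp/html" "$tmp/stash"
echo ok > "$tmp/html/index.html"

# "Break" readiness: stash index.html. The trailing `|| true` mirrors the
# test's tolerance of the file already being gone on a retried invocation.
mv -v "$tmp/html/index.html" "$tmp/stash/" || true

# "Restore" readiness: move the file back into place.
mv -v "$tmp/stash/index.html" "$tmp/html/" || true

result=$(cat "$tmp/html/index.html")
echo "$result"
rm -rf "$tmp"
```

In the cluster the same two moves gate when each ordinal's pod goes Ready, which is how the test controls the pace of the rolling update and the subsequent rollback.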