I0501 10:46:53.799852 6 e2e.go:224] Starting e2e run "11cd58fa-8b99-11ea-88a3-0242ac110017" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588330013 - Will randomize all specs
Will run 201 of 2164 specs

May 1 10:46:53.985: INFO: >>> kubeConfig: /root/.kube/config
May 1 10:46:53.989: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 1 10:46:54.011: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 1 10:46:54.049: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 1 10:46:54.049: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 1 10:46:54.049: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 1 10:46:54.079: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 1 10:46:54.079: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 1 10:46:54.079: INFO: e2e test version: v1.13.12
May 1 10:46:54.080: INFO: kube-apiserver version: v1.13.12
SSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 10:46:54.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
May 1 10:46:54.210: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-vkzrk
May 1 10:46:58.226: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-vkzrk
STEP: checking the pod's current state and verifying that restartCount is present
May 1 10:46:58.228: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 10:50:59.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vkzrk" for this suite.
May 1 10:51:05.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 10:51:05.322: INFO: namespace: e2e-tests-container-probe-vkzrk, resource: bindings, ignored listing per whitelist
May 1 10:51:05.370: INFO: namespace e2e-tests-container-probe-vkzrk deletion completed in 6.086516277s

• [SLOW TEST:251.289 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 10:51:05.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-dqrsf
May 1 10:51:09.500: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-dqrsf
STEP: checking the pod's current state and verifying that restartCount is present
May 1 10:51:09.503: INFO: Initial restart count of pod liveness-http is 0
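Both Probing-container specs in this run drive an HTTP liveness probe against /healthz: the first expects the probe to keep passing (no restart within the observation window), the second expects a failing probe to trigger a restart. As a rough sketch, a pod with such a probe might be declared as follows; the image, port, and timing values here are illustrative assumptions, not the exact spec built by container_probe.go:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http            # same name as the pod in the log; the rest is assumed
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # hypothetical image serving /healthz
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz           # the endpoint both specs probe
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
```

When /healthz starts returning a failure status, the kubelet kills the container and the pod's restartCount increments, which is what the second spec observes when the restart count moves from 0 to 1.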
May 1 10:51:33.733: INFO: Restart count of pod e2e-tests-container-probe-dqrsf/liveness-http is now 1 (24.229734821s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 10:51:33.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-dqrsf" for this suite.
May 1 10:51:39.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 10:51:39.852: INFO: namespace: e2e-tests-container-probe-dqrsf, resource: bindings, ignored listing per whitelist
May 1 10:51:39.858: INFO: namespace e2e-tests-container-probe-dqrsf deletion completed in 6.088880859s

• [SLOW TEST:34.488 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 10:51:39.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 1 10:51:44.045: INFO: Waiting up to 5m0s for pod "client-envvars-bf1614eb-8b99-11ea-88a3-0242ac110017" in namespace "e2e-tests-pods-xt86w" to be "success or failure"
May 1 10:51:44.077: INFO: Pod "client-envvars-bf1614eb-8b99-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 32.731839ms
May 1 10:51:46.082: INFO: Pod "client-envvars-bf1614eb-8b99-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037128798s
May 1 10:51:48.085: INFO: Pod "client-envvars-bf1614eb-8b99-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040503237s
STEP: Saw pod success
May 1 10:51:48.085: INFO: Pod "client-envvars-bf1614eb-8b99-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 10:51:48.087: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-bf1614eb-8b99-11ea-88a3-0242ac110017 container env3cont:
STEP: delete the pod
May 1 10:51:48.109: INFO: Waiting for pod client-envvars-bf1614eb-8b99-11ea-88a3-0242ac110017 to disappear
May 1 10:51:48.138: INFO: Pod client-envvars-bf1614eb-8b99-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 10:51:48.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xt86w" for this suite.
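The Pods spec above exercises Kubernetes' documented behavior of injecting `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` environment variables for every active Service into containers started afterwards. As a hedged sketch (the service name and ports here are invented for illustration; the test's actual service is created by the framework):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fooservice        # hypothetical; upper-cased to FOOSERVICE in the env vars
spec:
  selector:
    app: server
  ports:
  - port: 8765
    targetPort: 8080
```

A pod created after this Service exists would see, among others, `FOOSERVICE_SERVICE_HOST=<cluster IP>` and `FOOSERVICE_SERVICE_PORT=8765`. The client-envvars pod in the log simply dumps its environment, and the test asserts the expected variables are present in its output.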
May 1 10:52:34.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 10:52:34.211: INFO: namespace: e2e-tests-pods-xt86w, resource: bindings, ignored listing per whitelist
May 1 10:52:34.224: INFO: namespace e2e-tests-pods-xt86w deletion completed in 46.082727364s

• [SLOW TEST:54.366 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 10:52:34.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-dd13973d-8b99-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume configMaps
May 1 10:52:34.434: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dd1d3aac-8b99-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-2fqjc" to be "success or failure"
May 1 10:52:34.482: INFO: Pod "pod-projected-configmaps-dd1d3aac-8b99-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 48.565701ms
May 1 10:52:36.805: INFO: Pod "pod-projected-configmaps-dd1d3aac-8b99-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.371888717s
May 1 10:52:38.809: INFO: Pod "pod-projected-configmaps-dd1d3aac-8b99-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.375891219s
STEP: Saw pod success
May 1 10:52:38.809: INFO: Pod "pod-projected-configmaps-dd1d3aac-8b99-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 10:52:38.812: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-dd1d3aac-8b99-11ea-88a3-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May 1 10:52:38.835: INFO: Waiting for pod pod-projected-configmaps-dd1d3aac-8b99-11ea-88a3-0242ac110017 to disappear
May 1 10:52:38.839: INFO: Pod pod-projected-configmaps-dd1d3aac-8b99-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 10:52:38.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2fqjc" for this suite.
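The Projected-configMap spec above consumes a ConfigMap through a `projected` volume while the container runs as a non-root user. A minimal sketch of such a pod follows; the user ID, image, key name, and mount path are illustrative assumptions, not the exact spec the test builds:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # the real pod name carries a generated UID suffix
spec:
  securityContext:
    runAsUser: 1000                        # non-root, the point of this spec
  containers:
  - name: projected-configmap-volume-test
    image: busybox                         # hypothetical
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # created by the test, with a UID suffix
```

The pod runs to completion ("success or failure" with Phase=Succeeded), and the test then reads the container's logs to verify the ConfigMap contents were readable as the non-root user.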
May 1 10:52:44.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 10:52:44.907: INFO: namespace: e2e-tests-projected-2fqjc, resource: bindings, ignored listing per whitelist
May 1 10:52:44.931: INFO: namespace e2e-tests-projected-2fqjc deletion completed in 6.089791759s

• [SLOW TEST:10.706 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 10:52:44.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 1 10:52:45.024: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-a,UID:e36c3838-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149636,Generation:0,CreationTimestamp:2020-05-01 10:52:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 1 10:52:45.024: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-a,UID:e36c3838-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149636,Generation:0,CreationTimestamp:2020-05-01 10:52:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 1 10:52:55.032: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-a,UID:e36c3838-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149656,Generation:0,CreationTimestamp:2020-05-01 10:52:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 1 10:52:55.032: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-a,UID:e36c3838-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149656,Generation:0,CreationTimestamp:2020-05-01 10:52:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 1 10:53:05.039: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-a,UID:e36c3838-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149676,Generation:0,CreationTimestamp:2020-05-01 10:52:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 1 10:53:05.040: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-a,UID:e36c3838-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149676,Generation:0,CreationTimestamp:2020-05-01 10:52:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 1 10:53:15.047: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-a,UID:e36c3838-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149696,Generation:0,CreationTimestamp:2020-05-01 10:52:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 1 10:53:15.047: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-a,UID:e36c3838-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149696,Generation:0,CreationTimestamp:2020-05-01 10:52:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 1 10:53:25.055: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-b,UID:fb4b4dc2-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149716,Generation:0,CreationTimestamp:2020-05-01 10:53:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 1 10:53:25.055: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-b,UID:fb4b4dc2-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149716,Generation:0,CreationTimestamp:2020-05-01 10:53:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May 1 10:53:35.062: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-b,UID:fb4b4dc2-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149736,Generation:0,CreationTimestamp:2020-05-01 10:53:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 1 10:53:35.062: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8pf6q,SelfLink:/api/v1/namespaces/e2e-tests-watch-8pf6q/configmaps/e2e-watch-test-configmap-b,UID:fb4b4dc2-8b99-11ea-99e8-0242ac110002,ResourceVersion:8149736,Generation:0,CreationTimestamp:2020-05-01 10:53:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 10:53:45.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-8pf6q" for this suite.
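The Watchers spec above registers three watchers — one selecting label A, one selecting label B, and one selecting A or B — and checks that every ADDED/MODIFIED/DELETED event reaches exactly the watchers whose selector matches the configmap's label. The fan-out logic being verified can be sketched as follows; the watcher names and helper function are illustrative, not part of the e2e framework:

```python
# Sketch of the label-selector fan-out the watch test verifies.
# Watcher names and this routing helper are illustrative assumptions.

WATCHERS = {
    "watch-A": {"watch-this-configmap": "multiple-watchers-A"},
    "watch-B": {"watch-this-configmap": "multiple-watchers-B"},
    "watch-AB": None,  # None = the A-or-B watcher, matching either label value
}

AB_VALUES = {"multiple-watchers-A", "multiple-watchers-B"}

def matching_watchers(labels: dict) -> set:
    """Return the names of watchers whose selector matches the object's labels."""
    hit = set()
    for name, selector in WATCHERS.items():
        if selector is None:
            if labels.get("watch-this-configmap") in AB_VALUES:
                hit.add(name)
        elif all(labels.get(k) == v for k, v in selector.items()):
            hit.add(name)
    return hit

# A configmap labelled for A is seen by watch-A and watch-AB, never watch-B:
print(sorted(matching_watchers({"watch-this-configmap": "multiple-watchers-A"})))
# → ['watch-A', 'watch-AB']
```

This mirrors why, in the log, each event on e2e-watch-test-configmap-a is reported twice (the A watcher and the A-or-B watcher), while events on e2e-watch-test-configmap-b are delivered to the B and A-or-B watchers instead.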
May 1 10:53:51.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 10:53:51.099: INFO: namespace: e2e-tests-watch-8pf6q, resource: bindings, ignored listing per whitelist
May 1 10:53:51.162: INFO: namespace e2e-tests-watch-8pf6q deletion completed in 6.095429795s

• [SLOW TEST:66.231 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 10:53:51.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-sjnzw
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-sjnzw
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-sjnzw
May 1 10:53:51.337: INFO: Found 0 stateful pods, waiting for 1
May 1 10:54:01.342: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 1 10:54:01.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sjnzw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 1 10:54:01.637: INFO: stderr: "I0501 10:54:01.501912 39 log.go:172] (0xc0001386e0) (0xc0005dd2c0) Create stream\nI0501 10:54:01.502004 39 log.go:172] (0xc0001386e0) (0xc0005dd2c0) Stream added, broadcasting: 1\nI0501 10:54:01.504427 39 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0501 10:54:01.504465 39 log.go:172] (0xc0001386e0) (0xc0005dd360) Create stream\nI0501 10:54:01.504476 39 log.go:172] (0xc0001386e0) (0xc0005dd360) Stream added, broadcasting: 3\nI0501 10:54:01.505458 39 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0501 10:54:01.505491 39 log.go:172] (0xc0001386e0) (0xc0005dd400) Create stream\nI0501 10:54:01.505503 39 log.go:172] (0xc0001386e0) (0xc0005dd400) Stream added, broadcasting: 5\nI0501 10:54:01.506135 39 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0501 10:54:01.629864 39 log.go:172] (0xc0001386e0) Data frame received for 3\nI0501 10:54:01.629892 39 log.go:172] (0xc0005dd360) (3) Data frame handling\nI0501 10:54:01.629905 39 log.go:172] (0xc0005dd360) (3) Data frame sent\nI0501 10:54:01.630182 39 log.go:172] (0xc0001386e0) Data frame received for 5\nI0501 10:54:01.630213 39 log.go:172] (0xc0001386e0) Data frame received for 3\nI0501 10:54:01.630246 39 log.go:172] (0xc0005dd360) (3) Data frame handling\nI0501 10:54:01.630285 39 log.go:172] (0xc0005dd400) (5) Data frame handling\nI0501 10:54:01.632399 39 log.go:172] (0xc0001386e0) Data frame received for 1\nI0501 10:54:01.632431 39 log.go:172] (0xc0005dd2c0) (1) Data frame handling\nI0501 10:54:01.632460 39 log.go:172] (0xc0005dd2c0) (1) Data frame sent\nI0501 10:54:01.632582 39 log.go:172] (0xc0001386e0) (0xc0005dd2c0) Stream removed, broadcasting: 1\nI0501 10:54:01.632621 39 log.go:172] (0xc0001386e0) Go away received\nI0501 10:54:01.632760 39 log.go:172] (0xc0001386e0) (0xc0005dd2c0) Stream removed, broadcasting: 1\nI0501 10:54:01.632774 39 log.go:172] (0xc0001386e0) (0xc0005dd360) Stream removed, broadcasting: 3\nI0501 10:54:01.632780 39 log.go:172] (0xc0001386e0) (0xc0005dd400) Stream removed, broadcasting: 5\n"
May 1 10:54:01.637: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 1 10:54:01.637: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 1 10:54:01.640: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 1 10:54:11.645: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 1 10:54:11.645: INFO: Waiting for statefulset status.replicas updated to 0
May 1 10:54:11.656: INFO: POD NODE PHASE GRACE CONDITIONS
May 1 10:54:11.656: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC }]
May 1 10:54:11.656: INFO:
May 1 10:54:11.656: INFO: StatefulSet ss has not reached scale 3, at 1
May 1 10:54:12.661: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997240501s
May 1 10:54:13.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992320912s
May 1 10:54:14.801: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.91103787s
May 1 10:54:15.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.851697312s
May 1 10:54:16.811: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.846926293s
May 1 10:54:17.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.842229919s
May 1 10:54:19.309: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.837532856s
May 1 10:54:20.328: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.343780848s
May 1 10:54:21.431: INFO: Verifying statefulset ss doesn't scale past 3 for another 324.810248ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-sjnzw
May 1 10:54:22.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sjnzw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 1 10:54:22.704: INFO: stderr: "I0501 10:54:22.594635 61 log.go:172] (0xc0006c8420) (0xc0005132c0) Create stream\nI0501 10:54:22.594704 61 log.go:172] (0xc0006c8420) (0xc0005132c0) Stream added, broadcasting: 1\nI0501 10:54:22.597472 61 log.go:172] (0xc0006c8420) Reply frame received for 1\nI0501 10:54:22.597533 61 log.go:172] (0xc0006c8420) (0xc000734000) Create stream\nI0501 10:54:22.597557 61 log.go:172] (0xc0006c8420) (0xc000734000) Stream added, broadcasting: 3\nI0501 10:54:22.598481 61 log.go:172] (0xc0006c8420) Reply frame received for 3\nI0501 10:54:22.598547 61 log.go:172] (0xc0006c8420) (0xc000114000) Create stream\nI0501 10:54:22.598570 61 log.go:172] (0xc0006c8420) (0xc000114000) Stream added, broadcasting: 5\nI0501 10:54:22.599267 61 log.go:172] (0xc0006c8420) Reply frame received for 5\nI0501 10:54:22.698004 61 log.go:172] (0xc0006c8420) Data frame received for 3\nI0501 10:54:22.698069 61 log.go:172] (0xc000734000) (3) Data frame handling\nI0501 10:54:22.698086 61 log.go:172] (0xc000734000) (3) Data frame sent\nI0501 10:54:22.698137 61 log.go:172] (0xc0006c8420) Data frame received for 5\nI0501 10:54:22.698162 61 log.go:172] (0xc000114000) (5) Data frame handling\nI0501 10:54:22.698290 61 log.go:172] (0xc0006c8420) Data frame received for 3\nI0501 10:54:22.698306 61 log.go:172] (0xc000734000) (3) Data frame handling\nI0501 10:54:22.699835 61 log.go:172] (0xc0006c8420) Data frame received for 1\nI0501 10:54:22.699897 61 log.go:172] (0xc0005132c0) (1) Data frame handling\nI0501 10:54:22.699919 61 log.go:172] (0xc0005132c0) (1) Data frame sent\nI0501 10:54:22.699951 61 log.go:172] (0xc0006c8420) (0xc0005132c0) Stream removed, broadcasting: 1\nI0501 10:54:22.699970 61 log.go:172] (0xc0006c8420) Go away received\nI0501 10:54:22.700187 61 log.go:172] (0xc0006c8420) (0xc0005132c0) Stream removed, broadcasting: 1\nI0501 10:54:22.700221 61 log.go:172] (0xc0006c8420) (0xc000734000) Stream removed, broadcasting: 3\nI0501 10:54:22.700240 61 log.go:172] (0xc0006c8420) (0xc000114000) Stream removed, broadcasting: 5\n"
May 1 10:54:22.704: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 1 10:54:22.704: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 1 10:54:22.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sjnzw ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 1 10:54:22.896: INFO: stderr: "I0501 10:54:22.829892 83 log.go:172] (0xc000148630) (0xc0006a9360) Create stream\nI0501 10:54:22.829955 83 log.go:172] (0xc000148630) (0xc0006a9360) Stream added, broadcasting: 1\nI0501 10:54:22.832948 83 log.go:172] (0xc000148630) Reply frame received for 1\nI0501 10:54:22.832995 83 log.go:172] (0xc000148630) (0xc0002bc000) Create stream\nI0501 10:54:22.833013 83 log.go:172] (0xc000148630) (0xc0002bc000) Stream added, broadcasting: 3\nI0501 10:54:22.834372 83 log.go:172] (0xc000148630) Reply frame received for 3\nI0501 10:54:22.834407 83 log.go:172] (0xc000148630) (0xc0006a9400) Create stream\nI0501 10:54:22.834419 83 log.go:172] (0xc000148630) (0xc0006a9400) Stream added, broadcasting: 5\nI0501 10:54:22.835430 83 log.go:172] (0xc000148630) Reply frame received for 5\nI0501 10:54:22.890751 83 log.go:172] (0xc000148630) Data frame received for 5\nI0501 10:54:22.890812 83 log.go:172] (0xc0006a9400) (5) Data frame handling\nI0501 10:54:22.890828 83 log.go:172] (0xc0006a9400) (5) Data frame sent\nI0501 10:54:22.890844 83 log.go:172] (0xc000148630) Data frame received for 5\nI0501 10:54:22.890866 83 log.go:172] (0xc0006a9400) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0501 10:54:22.890910 83 log.go:172] (0xc000148630) Data frame received for 3\nI0501 10:54:22.890950 83 log.go:172] (0xc0002bc000) (3) Data frame handling\nI0501 10:54:22.890965 83 log.go:172] (0xc0002bc000) (3) Data frame sent\nI0501 10:54:22.890975 83 log.go:172] (0xc000148630) Data frame received for 3\nI0501 10:54:22.890984 83 log.go:172] (0xc0002bc000) (3) Data frame handling\nI0501 10:54:22.892179 83 log.go:172] (0xc000148630) Data frame received for 1\nI0501 10:54:22.892209 83 log.go:172] (0xc0006a9360) (1) Data frame handling\nI0501 10:54:22.892225 83 log.go:172] (0xc0006a9360) (1) Data frame sent\nI0501 10:54:22.892255 83 log.go:172] (0xc000148630) (0xc0006a9360) Stream removed, broadcasting: 1\nI0501 10:54:22.892314 83 log.go:172] (0xc000148630) Go away received\nI0501 10:54:22.892695 83 log.go:172] (0xc000148630) (0xc0006a9360) Stream removed, broadcasting: 1\nI0501 10:54:22.892714 83 log.go:172] (0xc000148630) (0xc0002bc000) Stream removed, broadcasting: 3\nI0501 10:54:22.892722 83 log.go:172] (0xc000148630) (0xc0006a9400) Stream removed, broadcasting: 5\n"
May 1 10:54:22.896: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 1 10:54:22.896: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 10:54:22.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sjnzw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 10:54:23.074: INFO: stderr: "I0501 10:54:23.018524 106 log.go:172] (0xc00015c840) (0xc000681360) Create stream\nI0501 10:54:23.018579 106 log.go:172] (0xc00015c840) (0xc000681360) Stream added, broadcasting: 1\nI0501 10:54:23.021694 106 log.go:172] (0xc00015c840) Reply frame received for 1\nI0501 10:54:23.021752 106 log.go:172] (0xc00015c840) (0xc000722000) Create stream\nI0501 10:54:23.021776 106 log.go:172] (0xc00015c840) (0xc000722000) Stream added, broadcasting: 3\nI0501 10:54:23.022769 106 log.go:172] (0xc00015c840) Reply frame received for 3\nI0501 10:54:23.022813 106 log.go:172] (0xc00015c840) (0xc0005fc000) Create stream\nI0501 10:54:23.022829 106 log.go:172] (0xc00015c840) (0xc0005fc000) Stream added, broadcasting: 5\nI0501 10:54:23.023855 106 log.go:172] (0xc00015c840) Reply frame received for 5\nI0501 10:54:23.068711 106 log.go:172] (0xc00015c840) Data frame received for 3\nI0501 10:54:23.068751 106 log.go:172] (0xc000722000) (3) Data frame handling\nI0501 10:54:23.068760 106 log.go:172] (0xc000722000) (3) Data frame sent\nI0501 10:54:23.068766 106 log.go:172] (0xc00015c840) Data frame received for 3\nI0501 10:54:23.068773 106 log.go:172] (0xc000722000) (3) Data frame handling\nI0501 10:54:23.068812 106 log.go:172] (0xc00015c840) Data frame received for 5\nI0501 10:54:23.068823 106 log.go:172] (0xc0005fc000) (5) Data frame handling\nI0501 10:54:23.068849 106 log.go:172] (0xc0005fc000) (5) Data frame sent\nI0501 10:54:23.068866 106 log.go:172] (0xc00015c840) Data frame received for 5\nI0501 10:54:23.068884 106 log.go:172] (0xc0005fc000) (5) Data frame handling\nmv: can't rename 
'/tmp/index.html': No such file or directory\nI0501 10:54:23.070372 106 log.go:172] (0xc00015c840) Data frame received for 1\nI0501 10:54:23.070398 106 log.go:172] (0xc000681360) (1) Data frame handling\nI0501 10:54:23.070417 106 log.go:172] (0xc000681360) (1) Data frame sent\nI0501 10:54:23.070438 106 log.go:172] (0xc00015c840) (0xc000681360) Stream removed, broadcasting: 1\nI0501 10:54:23.070462 106 log.go:172] (0xc00015c840) Go away received\nI0501 10:54:23.070651 106 log.go:172] (0xc00015c840) (0xc000681360) Stream removed, broadcasting: 1\nI0501 10:54:23.070673 106 log.go:172] (0xc00015c840) (0xc000722000) Stream removed, broadcasting: 3\nI0501 10:54:23.070686 106 log.go:172] (0xc00015c840) (0xc0005fc000) Stream removed, broadcasting: 5\n" May 1 10:54:23.074: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 10:54:23.074: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 10:54:23.078: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 1 10:54:33.084: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 1 10:54:33.084: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 1 10:54:33.084: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 1 10:54:33.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sjnzw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 10:54:33.311: INFO: stderr: "I0501 10:54:33.219484 129 log.go:172] (0xc00075e0b0) (0xc000672280) Create stream\nI0501 10:54:33.219550 129 log.go:172] (0xc00075e0b0) (0xc000672280) Stream added, broadcasting: 1\nI0501 10:54:33.222227 129 log.go:172] (0xc00075e0b0) Reply frame received 
for 1\nI0501 10:54:33.222270 129 log.go:172] (0xc00075e0b0) (0xc000296aa0) Create stream\nI0501 10:54:33.222282 129 log.go:172] (0xc00075e0b0) (0xc000296aa0) Stream added, broadcasting: 3\nI0501 10:54:33.223388 129 log.go:172] (0xc00075e0b0) Reply frame received for 3\nI0501 10:54:33.223442 129 log.go:172] (0xc00075e0b0) (0xc0007be000) Create stream\nI0501 10:54:33.223459 129 log.go:172] (0xc00075e0b0) (0xc0007be000) Stream added, broadcasting: 5\nI0501 10:54:33.224304 129 log.go:172] (0xc00075e0b0) Reply frame received for 5\nI0501 10:54:33.304724 129 log.go:172] (0xc00075e0b0) Data frame received for 5\nI0501 10:54:33.304760 129 log.go:172] (0xc0007be000) (5) Data frame handling\nI0501 10:54:33.304822 129 log.go:172] (0xc00075e0b0) Data frame received for 3\nI0501 10:54:33.304851 129 log.go:172] (0xc000296aa0) (3) Data frame handling\nI0501 10:54:33.304877 129 log.go:172] (0xc000296aa0) (3) Data frame sent\nI0501 10:54:33.304895 129 log.go:172] (0xc00075e0b0) Data frame received for 3\nI0501 10:54:33.304906 129 log.go:172] (0xc000296aa0) (3) Data frame handling\nI0501 10:54:33.306913 129 log.go:172] (0xc00075e0b0) Data frame received for 1\nI0501 10:54:33.306940 129 log.go:172] (0xc000672280) (1) Data frame handling\nI0501 10:54:33.306981 129 log.go:172] (0xc000672280) (1) Data frame sent\nI0501 10:54:33.306996 129 log.go:172] (0xc00075e0b0) (0xc000672280) Stream removed, broadcasting: 1\nI0501 10:54:33.307017 129 log.go:172] (0xc00075e0b0) Go away received\nI0501 10:54:33.307260 129 log.go:172] (0xc00075e0b0) (0xc000672280) Stream removed, broadcasting: 1\nI0501 10:54:33.307286 129 log.go:172] (0xc00075e0b0) (0xc000296aa0) Stream removed, broadcasting: 3\nI0501 10:54:33.307299 129 log.go:172] (0xc00075e0b0) (0xc0007be000) Stream removed, broadcasting: 5\n" May 1 10:54:33.311: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 10:54:33.311: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: 
'/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 10:54:33.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sjnzw ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 10:54:33.599: INFO: stderr: "I0501 10:54:33.498082 151 log.go:172] (0xc00013a580) (0xc0004a61e0) Create stream\nI0501 10:54:33.498142 151 log.go:172] (0xc00013a580) (0xc0004a61e0) Stream added, broadcasting: 1\nI0501 10:54:33.501400 151 log.go:172] (0xc00013a580) Reply frame received for 1\nI0501 10:54:33.501454 151 log.go:172] (0xc00013a580) (0xc000850e60) Create stream\nI0501 10:54:33.501470 151 log.go:172] (0xc00013a580) (0xc000850e60) Stream added, broadcasting: 3\nI0501 10:54:33.502454 151 log.go:172] (0xc00013a580) Reply frame received for 3\nI0501 10:54:33.502481 151 log.go:172] (0xc00013a580) (0xc0004a6320) Create stream\nI0501 10:54:33.502489 151 log.go:172] (0xc00013a580) (0xc0004a6320) Stream added, broadcasting: 5\nI0501 10:54:33.503241 151 log.go:172] (0xc00013a580) Reply frame received for 5\nI0501 10:54:33.592143 151 log.go:172] (0xc00013a580) Data frame received for 3\nI0501 10:54:33.592198 151 log.go:172] (0xc000850e60) (3) Data frame handling\nI0501 10:54:33.592236 151 log.go:172] (0xc000850e60) (3) Data frame sent\nI0501 10:54:33.592290 151 log.go:172] (0xc00013a580) Data frame received for 3\nI0501 10:54:33.592326 151 log.go:172] (0xc000850e60) (3) Data frame handling\nI0501 10:54:33.592741 151 log.go:172] (0xc00013a580) Data frame received for 5\nI0501 10:54:33.592771 151 log.go:172] (0xc0004a6320) (5) Data frame handling\nI0501 10:54:33.594490 151 log.go:172] (0xc00013a580) Data frame received for 1\nI0501 10:54:33.594522 151 log.go:172] (0xc0004a61e0) (1) Data frame handling\nI0501 10:54:33.594545 151 log.go:172] (0xc0004a61e0) (1) Data frame sent\nI0501 10:54:33.594563 151 log.go:172] (0xc00013a580) (0xc0004a61e0) Stream removed, broadcasting: 1\nI0501 10:54:33.594581 151 
log.go:172] (0xc00013a580) Go away received\nI0501 10:54:33.594878 151 log.go:172] (0xc00013a580) (0xc0004a61e0) Stream removed, broadcasting: 1\nI0501 10:54:33.594904 151 log.go:172] (0xc00013a580) (0xc000850e60) Stream removed, broadcasting: 3\nI0501 10:54:33.594917 151 log.go:172] (0xc00013a580) (0xc0004a6320) Stream removed, broadcasting: 5\n" May 1 10:54:33.599: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 10:54:33.599: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 10:54:33.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-sjnzw ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 10:54:33.821: INFO: stderr: "I0501 10:54:33.725495 174 log.go:172] (0xc000138840) (0xc0002c06e0) Create stream\nI0501 10:54:33.725552 174 log.go:172] (0xc000138840) (0xc0002c06e0) Stream added, broadcasting: 1\nI0501 10:54:33.728414 174 log.go:172] (0xc000138840) Reply frame received for 1\nI0501 10:54:33.728473 174 log.go:172] (0xc000138840) (0xc000312dc0) Create stream\nI0501 10:54:33.728496 174 log.go:172] (0xc000138840) (0xc000312dc0) Stream added, broadcasting: 3\nI0501 10:54:33.729615 174 log.go:172] (0xc000138840) Reply frame received for 3\nI0501 10:54:33.729641 174 log.go:172] (0xc000138840) (0xc0002c0780) Create stream\nI0501 10:54:33.729662 174 log.go:172] (0xc000138840) (0xc0002c0780) Stream added, broadcasting: 5\nI0501 10:54:33.730587 174 log.go:172] (0xc000138840) Reply frame received for 5\nI0501 10:54:33.814721 174 log.go:172] (0xc000138840) Data frame received for 3\nI0501 10:54:33.814775 174 log.go:172] (0xc000312dc0) (3) Data frame handling\nI0501 10:54:33.814808 174 log.go:172] (0xc000138840) Data frame received for 5\nI0501 10:54:33.814834 174 log.go:172] (0xc0002c0780) (5) Data frame handling\nI0501 10:54:33.814860 174 log.go:172] (0xc000312dc0) 
(3) Data frame sent\nI0501 10:54:33.815025 174 log.go:172] (0xc000138840) Data frame received for 3\nI0501 10:54:33.815054 174 log.go:172] (0xc000312dc0) (3) Data frame handling\nI0501 10:54:33.816721 174 log.go:172] (0xc000138840) Data frame received for 1\nI0501 10:54:33.816756 174 log.go:172] (0xc0002c06e0) (1) Data frame handling\nI0501 10:54:33.816787 174 log.go:172] (0xc0002c06e0) (1) Data frame sent\nI0501 10:54:33.816815 174 log.go:172] (0xc000138840) (0xc0002c06e0) Stream removed, broadcasting: 1\nI0501 10:54:33.816840 174 log.go:172] (0xc000138840) Go away received\nI0501 10:54:33.817311 174 log.go:172] (0xc000138840) (0xc0002c06e0) Stream removed, broadcasting: 1\nI0501 10:54:33.817351 174 log.go:172] (0xc000138840) (0xc000312dc0) Stream removed, broadcasting: 3\nI0501 10:54:33.817364 174 log.go:172] (0xc000138840) (0xc0002c0780) Stream removed, broadcasting: 5\n" May 1 10:54:33.821: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 10:54:33.821: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 10:54:33.821: INFO: Waiting for statefulset status.replicas updated to 0 May 1 10:54:33.825: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 1 10:54:43.848: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 10:54:43.848: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 1 10:54:43.848: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 1 10:54:43.858: INFO: POD NODE PHASE GRACE CONDITIONS May 1 10:54:43.858: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC }] May 1 10:54:43.858: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:43.858: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:43.858: INFO: May 1 10:54:43.858: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 10:54:44.958: INFO: POD NODE PHASE GRACE CONDITIONS May 1 10:54:44.958: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC }] May 1 10:54:44.958: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:44.958: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:44.958: INFO: May 1 10:54:44.958: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 10:54:45.963: INFO: POD NODE PHASE GRACE CONDITIONS May 1 10:54:45.963: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC }] May 1 10:54:45.963: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:45.963: 
INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:45.963: INFO: May 1 10:54:45.963: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 10:54:46.968: INFO: POD NODE PHASE GRACE CONDITIONS May 1 10:54:46.968: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC }] May 1 10:54:46.968: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:46.969: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:34 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:46.969: INFO: May 1 10:54:46.969: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 10:54:47.974: INFO: POD NODE PHASE GRACE CONDITIONS May 1 10:54:47.974: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC }] May 1 10:54:47.974: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:47.974: INFO: May 1 10:54:47.974: INFO: StatefulSet ss has not reached scale 0, at 2 May 1 10:54:48.979: INFO: POD NODE PHASE GRACE CONDITIONS May 1 10:54:48.979: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC }] May 1 10:54:48.979: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:48.979: INFO: May 1 10:54:48.979: INFO: StatefulSet ss has not reached scale 0, at 2 May 1 10:54:49.983: INFO: POD NODE PHASE GRACE CONDITIONS May 1 10:54:49.983: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC }] May 1 10:54:49.983: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:49.984: INFO: May 1 10:54:49.984: INFO: StatefulSet ss has not reached scale 0, at 2 May 1 10:54:50.988: INFO: POD NODE PHASE GRACE CONDITIONS May 1 10:54:50.988: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:53:51 +0000 UTC }] May 1 10:54:50.989: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 10:54:11 +0000 UTC }] May 1 10:54:50.989: INFO: May 1 10:54:50.989: INFO: StatefulSet ss has not reached scale 0, at 2 May 1 10:54:51.992: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.865915007s May 1 10:54:52.996: INFO: Verifying statefulset ss doesn't scale past 0 for another 862.529948ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-sjnzw May 1 10:54:54.000: INFO: Scaling statefulset ss to 0 May 1 10:54:54.009: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 1 10:54:54.011: INFO: Deleting all statefulset in ns e2e-tests-statefulset-sjnzw May 1 10:54:54.014: INFO: Scaling statefulset ss to 0 May 1 10:54:54.022: INFO: Waiting for statefulset status.replicas updated to 0 May 1 10:54:54.024: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 10:54:54.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-sjnzw" for this suite. 
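The scale-down phase above toggles pod readiness by moving `index.html` out of the nginx web root, so the pod's readiness check starts failing; the `|| true` makes the exec idempotent, which is why the `mv: can't rename '/tmp/index.html': No such file or directory` lines earlier in the log are harmless. A minimal local sketch of that idiom, using scratch directories in place of a real pod's filesystem:

```shell
# Sketch of the readiness-toggle idiom from the log. The test runs
#   mv -v /usr/share/nginx/html/index.html /tmp/ || true
# inside each pod; `|| true` keeps the exec from failing when the
# file has already been moved. Scratch directories stand in for the
# pod's filesystem here, so this runs without a cluster.
webroot=$(mktemp -d)   # stands in for /usr/share/nginx/html
stash=$(mktemp -d)     # stands in for /tmp
echo ok > "$webroot/index.html"

# Make the pod "unready": hide index.html from the readiness check.
mv -v "$webroot/index.html" "$stash/" || true
# Repeating the move must not fail -- this is what `|| true` buys.
mv -v "$webroot/index.html" "$stash/" || true

# Restore readiness by moving the file back.
mv -v "$stash/index.html" "$webroot/" || true
```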
May 1 10:55:00.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 10:55:00.142: INFO: namespace: e2e-tests-statefulset-sjnzw, resource: bindings, ignored listing per whitelist May 1 10:55:00.167: INFO: namespace e2e-tests-statefulset-sjnzw deletion completed in 6.094643577s • [SLOW TEST:69.005 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 10:55:00.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 1 10:55:00.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pcll8' May 1 10:55:02.930: INFO: stderr: "" May 1 10:55:02.930: INFO: stdout: "replicationcontroller/redis-master 
created\n" STEP: Waiting for Redis master to start. May 1 10:55:03.975: INFO: Selector matched 1 pods for map[app:redis] May 1 10:55:03.975: INFO: Found 0 / 1 May 1 10:55:04.957: INFO: Selector matched 1 pods for map[app:redis] May 1 10:55:04.957: INFO: Found 0 / 1 May 1 10:55:05.935: INFO: Selector matched 1 pods for map[app:redis] May 1 10:55:05.935: INFO: Found 0 / 1 May 1 10:55:06.935: INFO: Selector matched 1 pods for map[app:redis] May 1 10:55:06.935: INFO: Found 1 / 1 May 1 10:55:06.935: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 1 10:55:06.937: INFO: Selector matched 1 pods for map[app:redis] May 1 10:55:06.937: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 1 10:55:06.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-tzcm7 --namespace=e2e-tests-kubectl-pcll8 -p {"metadata":{"annotations":{"x":"y"}}}' May 1 10:55:07.049: INFO: stderr: "" May 1 10:55:07.049: INFO: stdout: "pod/redis-master-tzcm7 patched\n" STEP: checking annotations May 1 10:55:07.052: INFO: Selector matched 1 pods for map[app:redis] May 1 10:55:07.053: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 10:55:07.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pcll8" for this suite. 
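The patch step above adds an annotation to a live pod with a strategic-merge patch. The command shape, lifted from the log (the pod and namespace names are from this particular run, so substitute your own; this needs a reachable cluster, unlike the local sketch earlier):

```shell
# Add an annotation via a strategic-merge patch, then read it back.
# Pod and namespace names are the ones from the run above --
# substitute your own. Requires a live cluster and kubectl.
kubectl --namespace=e2e-tests-kubectl-pcll8 \
  patch pod redis-master-tzcm7 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'

# Verify the annotation landed.
kubectl --namespace=e2e-tests-kubectl-pcll8 \
  get pod redis-master-tzcm7 \
  -o jsonpath='{.metadata.annotations.x}'
```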
May  1 10:55:29.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 10:55:29.146: INFO: namespace: e2e-tests-kubectl-pcll8, resource: bindings, ignored listing per whitelist
May  1 10:55:29.146: INFO: namespace e2e-tests-kubectl-pcll8 deletion completed in 22.089574807s
• [SLOW TEST:28.978 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 10:55:29.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May  1 10:55:29.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-bkz6k'
May  1 10:55:29.355: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May  1 10:55:29.355: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
May  1 10:55:29.358: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
May  1 10:55:29.381: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
May  1 10:55:29.421: INFO: scanned /root for discovery docs:
May  1 10:55:29.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-bkz6k'
May  1 10:55:45.264: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May  1 10:55:45.264: INFO: stdout: "Created e2e-test-nginx-rc-bc342b9dfb8b992c02918095efa86417\nScaling up e2e-test-nginx-rc-bc342b9dfb8b992c02918095efa86417 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-bc342b9dfb8b992c02918095efa86417 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-bc342b9dfb8b992c02918095efa86417 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
May  1 10:55:45.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-bkz6k'
May  1 10:55:45.440: INFO: stderr: ""
May  1 10:55:45.440: INFO: stdout: "e2e-test-nginx-rc-bc342b9dfb8b992c02918095efa86417-kbgtl "
May  1 10:55:45.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-bc342b9dfb8b992c02918095efa86417-kbgtl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkz6k'
May  1 10:55:45.535: INFO: stderr: ""
May  1 10:55:45.535: INFO: stdout: "true"
May  1 10:55:45.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-bc342b9dfb8b992c02918095efa86417-kbgtl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bkz6k'
May  1 10:55:46.027: INFO: stderr: ""
May  1 10:55:46.027: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
May  1 10:55:46.028: INFO: e2e-test-nginx-rc-bc342b9dfb8b992c02918095efa86417-kbgtl is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
May  1 10:55:46.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-bkz6k'
May  1 10:55:46.239: INFO: stderr: ""
May  1 10:55:46.239: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 10:55:46.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bkz6k" for this suite.
May  1 10:55:52.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 10:55:53.009: INFO: namespace: e2e-tests-kubectl-bkz6k, resource: bindings, ignored listing per whitelist
May  1 10:55:53.025: INFO: namespace e2e-tests-kubectl-bkz6k deletion completed in 6.72265743s
• [SLOW TEST:23.879 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 10:55:53.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May  1 10:55:53.156: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 10:55:54.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-5cz7b" for this suite.
May  1 10:56:00.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 10:56:00.322: INFO: namespace: e2e-tests-custom-resource-definition-5cz7b, resource: bindings, ignored listing per whitelist
May  1 10:56:00.360: INFO: namespace e2e-tests-custom-resource-definition-5cz7b deletion completed in 6.119877737s
• [SLOW TEST:7.335 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 10:56:00.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May  1 10:56:00.475: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  1 10:56:00.483: INFO: Waiting for terminating namespaces to be deleted...
May  1 10:56:00.485: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May  1 10:56:00.490: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May  1 10:56:00.490: INFO: 	Container kube-proxy ready: true, restart count 0
May  1 10:56:00.490: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May  1 10:56:00.490: INFO: 	Container kindnet-cni ready: true, restart count 0
May  1 10:56:00.490: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May  1 10:56:00.490: INFO: 	Container coredns ready: true, restart count 0
May  1 10:56:00.490: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May  1 10:56:00.495: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May  1 10:56:00.495: INFO: 	Container kindnet-cni ready: true, restart count 0
May  1 10:56:00.495: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May  1 10:56:00.495: INFO: 	Container coredns ready: true, restart count 0
May  1 10:56:00.495: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May  1 10:56:00.495: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
May  1 10:56:00.561: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker
May  1 10:56:00.561: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2
May  1 10:56:00.561: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker
May  1 10:56:00.561: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2
May  1 10:56:00.561: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2
May  1 10:56:00.561: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-57fc9f3e-8b9a-11ea-88a3-0242ac110017.160ae23ac1afb320], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-k6tbj/filler-pod-57fc9f3e-8b9a-11ea-88a3-0242ac110017 to hunter-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-57fc9f3e-8b9a-11ea-88a3-0242ac110017.160ae23b10ec5fec], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-57fc9f3e-8b9a-11ea-88a3-0242ac110017.160ae23b673e463f], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-57fc9f3e-8b9a-11ea-88a3-0242ac110017.160ae23b84155c12], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-57fd8316-8b9a-11ea-88a3-0242ac110017.160ae23ac1aea4fa], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-k6tbj/filler-pod-57fd8316-8b9a-11ea-88a3-0242ac110017 to hunter-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-57fd8316-8b9a-11ea-88a3-0242ac110017.160ae23b555cc874], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-57fd8316-8b9a-11ea-88a3-0242ac110017.160ae23b9cea4bbb], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-57fd8316-8b9a-11ea-88a3-0242ac110017.160ae23bad36138e], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.160ae23c284b021e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 10:56:07.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-k6tbj" for this suite.
May  1 10:56:15.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 10:56:15.768: INFO: namespace: e2e-tests-sched-pred-k6tbj, resource: bindings, ignored listing per whitelist
May  1 10:56:15.803: INFO: namespace e2e-tests-sched-pred-k6tbj deletion completed in 8.105603901s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:15.443 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http  [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 10:56:15.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http  [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-q2584
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  1 10:56:15.996: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May  1 10:56:44.125: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.71:8080/dial?request=hostName&protocol=http&host=10.244.2.70&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-q2584 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 10:56:44.125: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:56:44.163260       6 log.go:172] (0xc001d2a2c0) (0xc0018ae1e0) Create stream
I0501 10:56:44.163287       6 log.go:172] (0xc001d2a2c0) (0xc0018ae1e0) Stream added, broadcasting: 1
I0501 10:56:44.165792       6 log.go:172] (0xc001d2a2c0) Reply frame received for 1
I0501 10:56:44.165829       6 log.go:172] (0xc001d2a2c0) (0xc0018ae280) Create stream
I0501 10:56:44.165843       6 log.go:172] (0xc001d2a2c0) (0xc0018ae280) Stream added, broadcasting: 3
I0501 10:56:44.166735       6 log.go:172] (0xc001d2a2c0) Reply frame received for 3
I0501 10:56:44.166773       6 log.go:172] (0xc001d2a2c0) (0xc001646820) Create stream
I0501 10:56:44.166788       6 log.go:172] (0xc001d2a2c0) (0xc001646820) Stream added, broadcasting: 5
I0501 10:56:44.167690       6 log.go:172] (0xc001d2a2c0) Reply frame received for 5
I0501 10:56:44.213399       6 log.go:172] (0xc001d2a2c0) Data frame received for 3
I0501 10:56:44.213430       6 log.go:172] (0xc0018ae280) (3) Data frame handling
I0501 10:56:44.213455       6 log.go:172] (0xc0018ae280) (3) Data frame sent
I0501 10:56:44.213702       6 log.go:172] (0xc001d2a2c0) Data frame received for 3
I0501 10:56:44.213733       6 log.go:172] (0xc0018ae280) (3) Data frame handling
I0501 10:56:44.213810       6 log.go:172] (0xc001d2a2c0) Data frame received for 5
I0501 10:56:44.213824       6 log.go:172] (0xc001646820) (5) Data frame handling
I0501 10:56:44.215475       6 log.go:172] (0xc001d2a2c0) Data frame received for 1
I0501 10:56:44.215508       6 log.go:172] (0xc0018ae1e0) (1) Data frame handling
I0501 10:56:44.215548       6 log.go:172] (0xc0018ae1e0) (1) Data frame sent
I0501 10:56:44.215583       6 log.go:172] (0xc001d2a2c0) (0xc0018ae1e0) Stream removed, broadcasting: 1
I0501 10:56:44.215616       6 log.go:172] (0xc001d2a2c0) Go away received
I0501 10:56:44.215719       6 log.go:172] (0xc001d2a2c0) (0xc0018ae1e0) Stream removed, broadcasting: 1
I0501 10:56:44.215752       6 log.go:172] (0xc001d2a2c0) (0xc0018ae280) Stream removed, broadcasting: 3
I0501 10:56:44.215766       6 log.go:172] (0xc001d2a2c0) (0xc001646820) Stream removed, broadcasting: 5
May  1 10:56:44.215: INFO: Waiting for endpoints: map[]
May  1 10:56:44.219: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.71:8080/dial?request=hostName&protocol=http&host=10.244.1.28&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-q2584 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 10:56:44.219: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:56:44.255366       6 log.go:172] (0xc000d8a2c0) (0xc001eeeb40) Create stream
I0501 10:56:44.255397       6 log.go:172] (0xc000d8a2c0) (0xc001eeeb40) Stream added, broadcasting: 1
I0501 10:56:44.258604       6 log.go:172] (0xc000d8a2c0) Reply frame received for 1
I0501 10:56:44.258681       6 log.go:172] (0xc000d8a2c0) (0xc001830000) Create stream
I0501 10:56:44.258706       6 log.go:172] (0xc000d8a2c0) (0xc001830000) Stream added, broadcasting: 3
I0501 10:56:44.259747       6 log.go:172] (0xc000d8a2c0) Reply frame received for 3
I0501 10:56:44.259792       6 log.go:172] (0xc000d8a2c0) (0xc001830140) Create stream
I0501 10:56:44.259814       6 log.go:172] (0xc000d8a2c0) (0xc001830140) Stream added, broadcasting: 5
I0501 10:56:44.260813       6 log.go:172] (0xc000d8a2c0) Reply frame received for 5
I0501 10:56:44.346103       6 log.go:172] (0xc000d8a2c0) Data frame received for 3
I0501 10:56:44.346138       6 log.go:172] (0xc001830000) (3) Data frame handling
I0501 10:56:44.346167       6 log.go:172] (0xc001830000) (3) Data frame sent
I0501 10:56:44.347212       6 log.go:172] (0xc000d8a2c0) Data frame received for 5
I0501 10:56:44.347239       6 log.go:172] (0xc001830140) (5) Data frame handling
I0501 10:56:44.347404       6 log.go:172] (0xc000d8a2c0) Data frame received for 3
I0501 10:56:44.347420       6 log.go:172] (0xc001830000) (3) Data frame handling
I0501 10:56:44.348732       6 log.go:172] (0xc000d8a2c0) Data frame received for 1
I0501 10:56:44.348781       6 log.go:172] (0xc001eeeb40) (1) Data frame handling
I0501 10:56:44.348823       6 log.go:172] (0xc001eeeb40) (1) Data frame sent
I0501 10:56:44.348845       6 log.go:172] (0xc000d8a2c0) (0xc001eeeb40) Stream removed, broadcasting: 1
I0501 10:56:44.348864       6 log.go:172] (0xc000d8a2c0) Go away received
I0501 10:56:44.349037       6 log.go:172] (0xc000d8a2c0) (0xc001eeeb40) Stream removed, broadcasting: 1
I0501 10:56:44.349063       6 log.go:172] (0xc000d8a2c0) (0xc001830000) Stream removed, broadcasting: 3
I0501 10:56:44.349079       6 log.go:172] (0xc000d8a2c0) (0xc001830140) Stream removed, broadcasting: 5
May  1 10:56:44.349: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 10:56:44.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-q2584" for this suite.
May  1 10:57:08.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 10:57:08.444: INFO: namespace: e2e-tests-pod-network-test-q2584, resource: bindings, ignored listing per whitelist
May  1 10:57:08.493: INFO: namespace e2e-tests-pod-network-test-q2584 deletion completed in 24.139774354s
• [SLOW TEST:52.690 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http  [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification  [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 10:57:08.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification  [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May  1 10:57:13.184: INFO: Successfully updated pod "labelsupdate80887f21-8b9a-11ea-88a3-0242ac110017"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 10:57:15.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kvpxc" for this suite.
May  1 10:57:39.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 10:57:39.328: INFO: namespace: e2e-tests-projected-kvpxc, resource: bindings, ignored listing per whitelist
May  1 10:57:39.350: INFO: namespace e2e-tests-projected-kvpxc deletion completed in 24.094259693s
• [SLOW TEST:30.857 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification  [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set  [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 10:57:39.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set  [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May  1 10:57:39.482: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92f1d4e2-8b9a-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-5q28r" to be "success or failure"
May  1 10:57:39.505: INFO: Pod "downwardapi-volume-92f1d4e2-8b9a-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 23.261779ms
May  1 10:57:41.510: INFO: Pod "downwardapi-volume-92f1d4e2-8b9a-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028274498s
May  1 10:57:43.514: INFO: Pod "downwardapi-volume-92f1d4e2-8b9a-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031493941s
STEP: Saw pod success
May  1 10:57:43.514: INFO: Pod "downwardapi-volume-92f1d4e2-8b9a-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May  1 10:57:43.516: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-92f1d4e2-8b9a-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May  1 10:57:43.651: INFO: Waiting for pod downwardapi-volume-92f1d4e2-8b9a-11ea-88a3-0242ac110017 to disappear
May  1 10:57:43.870: INFO: Pod downwardapi-volume-92f1d4e2-8b9a-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 10:57:43.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5q28r" for this suite.
May  1 10:57:50.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 10:57:50.219: INFO: namespace: e2e-tests-projected-5q28r, resource: bindings, ignored listing per whitelist
May  1 10:57:50.234: INFO: namespace e2e-tests-projected-5q28r deletion completed in 6.359802523s
• [SLOW TEST:10.883 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set  [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file  [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 10:57:50.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file  [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May  1 10:58:00.423: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-q9876 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 10:58:00.423: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:58:00.446727       6 log.go:172] (0xc000c942c0) (0xc001da9b80) Create stream
I0501 10:58:00.446777       6 log.go:172] (0xc000c942c0) (0xc001da9b80) Stream added, broadcasting: 1
I0501 10:58:00.448903       6 log.go:172] (0xc000c942c0) Reply frame received for 1
I0501 10:58:00.448932       6 log.go:172] (0xc000c942c0) (0xc001be2fa0) Create stream
I0501 10:58:00.448941       6 log.go:172] (0xc000c942c0) (0xc001be2fa0) Stream added, broadcasting: 3
I0501 10:58:00.449844       6 log.go:172] (0xc000c942c0) Reply frame received for 3
I0501 10:58:00.449880       6 log.go:172] (0xc000c942c0) (0xc001da9c20) Create stream
I0501 10:58:00.449893       6 log.go:172] (0xc000c942c0) (0xc001da9c20) Stream added, broadcasting: 5
I0501 10:58:00.450631       6 log.go:172] (0xc000c942c0) Reply frame received for 5
I0501 10:58:00.529528       6 log.go:172] (0xc000c942c0) Data frame received for 5
I0501 10:58:00.529581       6 log.go:172] (0xc001da9c20) (5) Data frame handling
I0501 10:58:00.529620       6 log.go:172] (0xc000c942c0) Data frame received for 3
I0501 10:58:00.529638       6 log.go:172] (0xc001be2fa0) (3) Data frame handling
I0501 10:58:00.529660       6 log.go:172] (0xc001be2fa0) (3) Data frame sent
I0501 10:58:00.529676       6 log.go:172] (0xc000c942c0) Data frame received for 3
I0501 10:58:00.529729       6 log.go:172] (0xc001be2fa0) (3) Data frame handling
I0501 10:58:00.531739       6 log.go:172] (0xc000c942c0) Data frame received for 1
I0501 10:58:00.531769       6 log.go:172] (0xc001da9b80) (1) Data frame handling
I0501 10:58:00.531782       6 log.go:172] (0xc001da9b80) (1) Data frame sent
I0501 10:58:00.531809       6 log.go:172] (0xc000c942c0) (0xc001da9b80) Stream removed, broadcasting: 1
I0501 10:58:00.531919       6 log.go:172] (0xc000c942c0) (0xc001da9b80) Stream removed, broadcasting: 1
I0501 10:58:00.531937       6 log.go:172] (0xc000c942c0) (0xc001be2fa0) Stream removed, broadcasting: 3
I0501 10:58:00.531955       6 log.go:172] (0xc000c942c0) (0xc001da9c20) Stream removed, broadcasting: 5
May  1 10:58:00.531: INFO: Exec stderr: ""
May  1 10:58:00.532: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-q9876 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 10:58:00.532: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:58:00.534607       6 log.go:172] (0xc000c942c0) Go away received
I0501 10:58:00.564453       6 log.go:172] (0xc000c94790) (0xc001da9ea0) Create stream
I0501 10:58:00.564483       6 log.go:172] (0xc000c94790) (0xc001da9ea0) Stream added, broadcasting: 1
I0501 10:58:00.566654       6 log.go:172] (0xc000c94790) Reply frame received for 1
I0501 10:58:00.566699       6 log.go:172] (0xc000c94790) (0xc00112d720) Create stream
I0501 10:58:00.566715       6 log.go:172] (0xc000c94790) (0xc00112d720) Stream added, broadcasting: 3
I0501 10:58:00.567740       6 log.go:172] (0xc000c94790) Reply frame received for 3
I0501 10:58:00.567789       6 log.go:172] (0xc000c94790) (0xc00112d7c0) Create stream
I0501 10:58:00.567800       6 log.go:172] (0xc000c94790) (0xc00112d7c0) Stream added, broadcasting: 5
I0501 10:58:00.568749       6 log.go:172] (0xc000c94790) Reply frame received for 5
I0501 10:58:00.647067       6 log.go:172] (0xc000c94790) Data frame received for 5
I0501 10:58:00.647107       6 log.go:172] (0xc00112d7c0) (5) Data frame handling
I0501 10:58:00.647133       6 log.go:172] (0xc000c94790) Data frame received for 3
I0501 10:58:00.647144       6 log.go:172] (0xc00112d720) (3) Data frame handling
I0501 10:58:00.647157       6 log.go:172] (0xc00112d720) (3) Data frame sent
I0501 10:58:00.647236       6 log.go:172] (0xc000c94790) Data frame received for 3
I0501 10:58:00.647270       6 log.go:172] (0xc00112d720) (3) Data frame handling
I0501 10:58:00.648634       6 log.go:172] (0xc000c94790) Data frame received for 1
I0501 10:58:00.648653       6 log.go:172] (0xc001da9ea0) (1) Data frame handling
I0501 10:58:00.648669       6 log.go:172] (0xc001da9ea0) (1) Data frame sent
I0501 10:58:00.648808       6 log.go:172] (0xc000c94790) (0xc001da9ea0) Stream removed, broadcasting: 1
I0501 10:58:00.648880       6 log.go:172] (0xc000c94790) Go away received
I0501 10:58:00.648936       6 log.go:172] (0xc000c94790) (0xc001da9ea0) Stream removed, broadcasting: 1
I0501 10:58:00.648969       6 log.go:172] (0xc000c94790) (0xc00112d720) Stream removed, broadcasting: 3
I0501 10:58:00.648988       6 log.go:172] (0xc000c94790) (0xc00112d7c0) Stream removed, broadcasting: 5
May  1 10:58:00.649: INFO: Exec stderr: ""
May  1 10:58:00.649: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-q9876 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 10:58:00.649: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:58:00.680663       6 log.go:172] (0xc0008ee2c0) (0xc0011463c0) Create stream
I0501 10:58:00.680708       6 log.go:172] (0xc0008ee2c0) (0xc0011463c0) Stream added, broadcasting: 1
I0501 10:58:00.684753       6 log.go:172] (0xc0008ee2c0) Reply frame received for 1
I0501 10:58:00.684798       6 log.go:172] (0xc0008ee2c0) (0xc001146460) Create stream
I0501 10:58:00.684815       6 log.go:172] (0xc0008ee2c0) (0xc001146460) Stream added, broadcasting: 3
I0501 10:58:00.686050       6 log.go:172] (0xc0008ee2c0) Reply frame received for 3
I0501 10:58:00.686081       6 log.go:172] (0xc0008ee2c0) (0xc00112d900) Create stream
I0501 10:58:00.686092       6 log.go:172] (0xc0008ee2c0) (0xc00112d900) Stream added, broadcasting: 5
I0501 10:58:00.686912       6 log.go:172] (0xc0008ee2c0) Reply frame received for 5
I0501 10:58:00.738181       6 log.go:172] (0xc0008ee2c0) Data frame received for 5
I0501 10:58:00.738209       6 log.go:172] (0xc00112d900) (5) Data frame handling
I0501 10:58:00.738254       6 log.go:172] (0xc0008ee2c0) Data frame received for 3
I0501 10:58:00.738308       6 log.go:172] (0xc001146460) (3) Data frame handling
I0501 10:58:00.738351       6 log.go:172] (0xc001146460) (3) Data frame sent
I0501 10:58:00.738366       6 log.go:172] (0xc0008ee2c0) Data frame received for 3
I0501 10:58:00.738379       6 log.go:172] (0xc001146460) (3) Data frame handling
I0501 10:58:00.739382       6 log.go:172] (0xc0008ee2c0) Data frame received for 1
I0501 10:58:00.739400       6 log.go:172] (0xc0011463c0) (1) Data frame handling
I0501 10:58:00.739414       6 log.go:172] (0xc0011463c0) (1) Data frame sent
I0501 10:58:00.739437       6 log.go:172] (0xc0008ee2c0) (0xc0011463c0) Stream removed, broadcasting: 1
I0501 10:58:00.739536       6 log.go:172] (0xc0008ee2c0) (0xc0011463c0) Stream removed, broadcasting: 1
I0501 10:58:00.739549       6 log.go:172] (0xc0008ee2c0) (0xc001146460) Stream removed, broadcasting: 3
I0501 10:58:00.739597       6 log.go:172] (0xc0008ee2c0) Go away received
I0501 10:58:00.739637       6 log.go:172] (0xc0008ee2c0) (0xc00112d900) Stream removed, broadcasting: 5
May  1 10:58:00.739: INFO: Exec stderr: ""
May  1 10:58:00.739: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-q9876 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 10:58:00.739: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:58:00.769990       6 log.go:172] (0xc000e662c0) (0xc001be3220) Create stream
I0501 10:58:00.770020       6 log.go:172] (0xc000e662c0) (0xc001be3220) Stream added, broadcasting: 1
I0501 10:58:00.772318       6 log.go:172] (0xc000e662c0) Reply frame received for 1
I0501 10:58:00.772367       6 log.go:172] (0xc000e662c0) (0xc0011c1cc0) Create stream
I0501 10:58:00.772392       6 log.go:172] (0xc000e662c0) (0xc0011c1cc0) Stream added, broadcasting: 3
I0501 10:58:00.773621       6 log.go:172] (0xc000e662c0) Reply frame received for 3
I0501 10:58:00.773660       6 log.go:172] (0xc000e662c0) (0xc0011c1d60) Create stream
I0501 10:58:00.773672       6 log.go:172] (0xc000e662c0) (0xc0011c1d60) Stream added, broadcasting: 5
I0501 10:58:00.774668       6 log.go:172] (0xc000e662c0) Reply frame received for 5
I0501 10:58:00.846237       6 log.go:172] (0xc000e662c0) Data frame received for 5
I0501 10:58:00.846266       6 log.go:172] (0xc0011c1d60) (5) Data frame handling
I0501 10:58:00.846284       6 log.go:172] (0xc000e662c0) Data frame received for 3
I0501 10:58:00.846291       6 log.go:172] (0xc0011c1cc0) (3) Data frame handling
I0501 10:58:00.846303       6 log.go:172] (0xc0011c1cc0) (3) Data frame sent
I0501 10:58:00.846308       6 log.go:172] (0xc000e662c0) Data frame received for 3
I0501 10:58:00.846317       6 log.go:172] (0xc0011c1cc0) (3) Data frame handling
I0501 10:58:00.847815       6 log.go:172] (0xc000e662c0) Data frame received for 1
I0501 10:58:00.847845       6 log.go:172] (0xc001be3220) (1) Data frame handling
I0501 10:58:00.847863       6 log.go:172] (0xc001be3220) (1) Data frame sent
I0501 10:58:00.847882       6 log.go:172] (0xc000e662c0) (0xc001be3220) Stream removed, broadcasting: 1
I0501 10:58:00.847908       6 log.go:172] (0xc000e662c0) Go away received
I0501 10:58:00.848014       6 log.go:172] (0xc000e662c0) (0xc001be3220) Stream removed, broadcasting: 1
I0501 10:58:00.848059       6 log.go:172] (0xc000e662c0) (0xc0011c1cc0) Stream removed, broadcasting: 3
I0501 10:58:00.848078       6 log.go:172] (0xc000e662c0) (0xc0011c1d60) Stream removed, broadcasting: 5
May  1 10:58:00.848: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May  1 10:58:00.848: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-q9876 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 10:58:00.848: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:58:00.885861       6 log.go:172] (0xc000e66790) (0xc001be34a0) Create stream
I0501 10:58:00.885885       6 log.go:172] (0xc000e66790) (0xc001be34a0) Stream added, broadcasting: 1
I0501 10:58:00.893751       6 log.go:172] (0xc000e66790) Reply frame received for 1
I0501 10:58:00.893796       6 log.go:172] (0xc000e66790) (0xc001830000) Create stream
I0501 10:58:00.893807       6 log.go:172] (0xc000e66790) (0xc001830000) Stream added, broadcasting: 3
I0501 10:58:00.894768       6 log.go:172] (0xc000e66790) Reply frame received for 3
I0501 10:58:00.894810       6 log.go:172] (0xc000e66790) (0xc000d22000) Create stream
I0501 10:58:00.894823 6 log.go:172] (0xc000e66790) (0xc000d22000) Stream added, broadcasting: 5
I0501 10:58:00.895522 6 log.go:172] (0xc000e66790) Reply frame received for 5
I0501 10:58:00.958105 6 log.go:172] (0xc000e66790) Data frame received for 5
I0501 10:58:00.958164 6 log.go:172] (0xc000d22000) (5) Data frame handling
I0501 10:58:00.958207 6 log.go:172] (0xc000e66790) Data frame received for 3
I0501 10:58:00.958228 6 log.go:172] (0xc001830000) (3) Data frame handling
I0501 10:58:00.958260 6 log.go:172] (0xc001830000) (3) Data frame sent
I0501 10:58:00.958283 6 log.go:172] (0xc000e66790) Data frame received for 3
I0501 10:58:00.958301 6 log.go:172] (0xc001830000) (3) Data frame handling
I0501 10:58:00.959722 6 log.go:172] (0xc000e66790) Data frame received for 1
I0501 10:58:00.959751 6 log.go:172] (0xc001be34a0) (1) Data frame handling
I0501 10:58:00.959767 6 log.go:172] (0xc001be34a0) (1) Data frame sent
I0501 10:58:00.959789 6 log.go:172] (0xc000e66790) (0xc001be34a0) Stream removed, broadcasting: 1
I0501 10:58:00.959817 6 log.go:172] (0xc000e66790) Go away received
I0501 10:58:00.959965 6 log.go:172] (0xc000e66790) (0xc001be34a0) Stream removed, broadcasting: 1
I0501 10:58:00.960000 6 log.go:172] (0xc000e66790) (0xc001830000) Stream removed, broadcasting: 3
I0501 10:58:00.960076 6 log.go:172] (0xc000e66790) (0xc000d22000) Stream removed, broadcasting: 5
May 1 10:58:00.960: INFO: Exec stderr: ""
May 1 10:58:00.960: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-q9876 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 10:58:00.960: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:58:00.995332 6 log.go:172] (0xc0008ee2c0) (0xc0018ae320) Create stream
I0501 10:58:00.995357 6 log.go:172] (0xc0008ee2c0) (0xc0018ae320) Stream added, broadcasting: 1
I0501 10:58:00.997386 6 log.go:172] (0xc0008ee2c0) Reply frame received for 1
I0501 10:58:00.997438 6 log.go:172] (0xc0008ee2c0) (0xc0018ae460) Create stream
I0501 10:58:00.997451 6 log.go:172] (0xc0008ee2c0) (0xc0018ae460) Stream added, broadcasting: 3
I0501 10:58:00.998379 6 log.go:172] (0xc0008ee2c0) Reply frame received for 3
I0501 10:58:00.998411 6 log.go:172] (0xc0008ee2c0) (0xc0018ae500) Create stream
I0501 10:58:00.998420 6 log.go:172] (0xc0008ee2c0) (0xc0018ae500) Stream added, broadcasting: 5
I0501 10:58:00.999381 6 log.go:172] (0xc0008ee2c0) Reply frame received for 5
I0501 10:58:01.076897 6 log.go:172] (0xc0008ee2c0) Data frame received for 5
I0501 10:58:01.076951 6 log.go:172] (0xc0018ae500) (5) Data frame handling
I0501 10:58:01.076993 6 log.go:172] (0xc0008ee2c0) Data frame received for 3
I0501 10:58:01.077026 6 log.go:172] (0xc0018ae460) (3) Data frame handling
I0501 10:58:01.077055 6 log.go:172] (0xc0018ae460) (3) Data frame sent
I0501 10:58:01.077072 6 log.go:172] (0xc0008ee2c0) Data frame received for 3
I0501 10:58:01.077086 6 log.go:172] (0xc0018ae460) (3) Data frame handling
I0501 10:58:01.078931 6 log.go:172] (0xc0008ee2c0) Data frame received for 1
I0501 10:58:01.078961 6 log.go:172] (0xc0018ae320) (1) Data frame handling
I0501 10:58:01.078973 6 log.go:172] (0xc0018ae320) (1) Data frame sent
I0501 10:58:01.078987 6 log.go:172] (0xc0008ee2c0) (0xc0018ae320) Stream removed, broadcasting: 1
I0501 10:58:01.079004 6 log.go:172] (0xc0008ee2c0) Go away received
I0501 10:58:01.079175 6 log.go:172] (0xc0008ee2c0) (0xc0018ae320) Stream removed, broadcasting: 1
I0501 10:58:01.079216 6 log.go:172] (0xc0008ee2c0) (0xc0018ae460) Stream removed, broadcasting: 3
I0501 10:58:01.079247 6 log.go:172] (0xc0008ee2c0) (0xc0018ae500) Stream removed, broadcasting: 5
May 1 10:58:01.079: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 1 10:58:01.079: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-q9876 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 10:58:01.079: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:58:01.112270 6 log.go:172] (0xc000ab0580) (0xc0011d8280) Create stream
I0501 10:58:01.112302 6 log.go:172] (0xc000ab0580) (0xc0011d8280) Stream added, broadcasting: 1
I0501 10:58:01.115336 6 log.go:172] (0xc000ab0580) Reply frame received for 1
I0501 10:58:01.115393 6 log.go:172] (0xc000ab0580) (0xc000d22140) Create stream
I0501 10:58:01.115410 6 log.go:172] (0xc000ab0580) (0xc000d22140) Stream added, broadcasting: 3
I0501 10:58:01.116576 6 log.go:172] (0xc000ab0580) Reply frame received for 3
I0501 10:58:01.116638 6 log.go:172] (0xc000ab0580) (0xc000d22280) Create stream
I0501 10:58:01.116655 6 log.go:172] (0xc000ab0580) (0xc000d22280) Stream added, broadcasting: 5
I0501 10:58:01.117947 6 log.go:172] (0xc000ab0580) Reply frame received for 5
I0501 10:58:01.197597 6 log.go:172] (0xc000ab0580) Data frame received for 5
I0501 10:58:01.197624 6 log.go:172] (0xc000d22280) (5) Data frame handling
I0501 10:58:01.197673 6 log.go:172] (0xc000ab0580) Data frame received for 3
I0501 10:58:01.197710 6 log.go:172] (0xc000d22140) (3) Data frame handling
I0501 10:58:01.197738 6 log.go:172] (0xc000d22140) (3) Data frame sent
I0501 10:58:01.197758 6 log.go:172] (0xc000ab0580) Data frame received for 3
I0501 10:58:01.197775 6 log.go:172] (0xc000d22140) (3) Data frame handling
I0501 10:58:01.199927 6 log.go:172] (0xc000ab0580) Data frame received for 1
I0501 10:58:01.199985 6 log.go:172] (0xc0011d8280) (1) Data frame handling
I0501 10:58:01.200017 6 log.go:172] (0xc0011d8280) (1) Data frame sent
I0501 10:58:01.200048 6 log.go:172] (0xc000ab0580) (0xc0011d8280) Stream removed, broadcasting: 1
I0501 10:58:01.200072 6 log.go:172] (0xc000ab0580) Go away received
I0501 10:58:01.200203 6 log.go:172] (0xc000ab0580) (0xc0011d8280) Stream removed, broadcasting: 1
I0501 10:58:01.200237 6 log.go:172] (0xc000ab0580) (0xc000d22140) Stream removed, broadcasting: 3
I0501 10:58:01.200254 6 log.go:172] (0xc000ab0580) (0xc000d22280) Stream removed, broadcasting: 5
May 1 10:58:01.200: INFO: Exec stderr: ""
May 1 10:58:01.200: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-q9876 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 10:58:01.200: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:58:01.237570 6 log.go:172] (0xc000e662c0) (0xc000d22780) Create stream
I0501 10:58:01.237602 6 log.go:172] (0xc000e662c0) (0xc000d22780) Stream added, broadcasting: 1
I0501 10:58:01.240129 6 log.go:172] (0xc000e662c0) Reply frame received for 1
I0501 10:58:01.240174 6 log.go:172] (0xc000e662c0) (0xc000d500a0) Create stream
I0501 10:58:01.240191 6 log.go:172] (0xc000e662c0) (0xc000d500a0) Stream added, broadcasting: 3
I0501 10:58:01.241370 6 log.go:172] (0xc000e662c0) Reply frame received for 3
I0501 10:58:01.241431 6 log.go:172] (0xc000e662c0) (0xc0018ae5a0) Create stream
I0501 10:58:01.241447 6 log.go:172] (0xc000e662c0) (0xc0018ae5a0) Stream added, broadcasting: 5
I0501 10:58:01.242308 6 log.go:172] (0xc000e662c0) Reply frame received for 5
I0501 10:58:01.291943 6 log.go:172] (0xc000e662c0) Data frame received for 3
I0501 10:58:01.291991 6 log.go:172] (0xc000d500a0) (3) Data frame handling
I0501 10:58:01.292038 6 log.go:172] (0xc000e662c0) Data frame received for 5
I0501 10:58:01.292090 6 log.go:172] (0xc0018ae5a0) (5) Data frame handling
I0501 10:58:01.292126 6 log.go:172] (0xc000d500a0) (3) Data frame sent
I0501 10:58:01.292157 6 log.go:172] (0xc000e662c0) Data frame received for 3
I0501 10:58:01.292188 6 log.go:172] (0xc000d500a0) (3) Data frame handling
I0501 10:58:01.293881 6 log.go:172] (0xc000e662c0) Data frame received for 1
I0501 10:58:01.293894 6 log.go:172] (0xc000d22780) (1) Data frame handling
I0501 10:58:01.293907 6 log.go:172] (0xc000d22780) (1) Data frame sent
I0501 10:58:01.293916 6 log.go:172] (0xc000e662c0) (0xc000d22780) Stream removed, broadcasting: 1
I0501 10:58:01.293981 6 log.go:172] (0xc000e662c0) Go away received
I0501 10:58:01.294027 6 log.go:172] (0xc000e662c0) (0xc000d22780) Stream removed, broadcasting: 1
I0501 10:58:01.294044 6 log.go:172] (0xc000e662c0) (0xc000d500a0) Stream removed, broadcasting: 3
I0501 10:58:01.294054 6 log.go:172] (0xc000e662c0) (0xc0018ae5a0) Stream removed, broadcasting: 5
May 1 10:58:01.294: INFO: Exec stderr: ""
May 1 10:58:01.294: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-q9876 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 10:58:01.294: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:58:01.328535 6 log.go:172] (0xc000e66840) (0xc000d22c80) Create stream
I0501 10:58:01.328558 6 log.go:172] (0xc000e66840) (0xc000d22c80) Stream added, broadcasting: 1
I0501 10:58:01.339866 6 log.go:172] (0xc000e66840) Reply frame received for 1
I0501 10:58:01.339926 6 log.go:172] (0xc000e66840) (0xc0018ae640) Create stream
I0501 10:58:01.339940 6 log.go:172] (0xc000e66840) (0xc0018ae640) Stream added, broadcasting: 3
I0501 10:58:01.341884 6 log.go:172] (0xc000e66840) Reply frame received for 3
I0501 10:58:01.341948 6 log.go:172] (0xc000e66840) (0xc001830140) Create stream
I0501 10:58:01.341964 6 log.go:172] (0xc000e66840) (0xc001830140) Stream added, broadcasting: 5
I0501 10:58:01.345099 6 log.go:172] (0xc000e66840) Reply frame received for 5
I0501 10:58:01.414516 6 log.go:172] (0xc000e66840) Data frame received for 5
I0501 10:58:01.414563 6 log.go:172] (0xc001830140) (5) Data frame handling
I0501 10:58:01.414593 6 log.go:172] (0xc000e66840) Data frame received for 3
I0501 10:58:01.414604 6 log.go:172] (0xc0018ae640) (3) Data frame handling
I0501 10:58:01.414613 6 log.go:172] (0xc0018ae640) (3) Data frame sent
I0501 10:58:01.414627 6 log.go:172] (0xc000e66840) Data frame received for 3
I0501 10:58:01.414637 6 log.go:172] (0xc0018ae640) (3) Data frame handling
I0501 10:58:01.416299 6 log.go:172] (0xc000e66840) Data frame received for 1
I0501 10:58:01.416322 6 log.go:172] (0xc000d22c80) (1) Data frame handling
I0501 10:58:01.416334 6 log.go:172] (0xc000d22c80) (1) Data frame sent
I0501 10:58:01.416350 6 log.go:172] (0xc000e66840) (0xc000d22c80) Stream removed, broadcasting: 1
I0501 10:58:01.416368 6 log.go:172] (0xc000e66840) Go away received
I0501 10:58:01.416549 6 log.go:172] (0xc000e66840) (0xc000d22c80) Stream removed, broadcasting: 1
I0501 10:58:01.416610 6 log.go:172] (0xc000e66840) (0xc0018ae640) Stream removed, broadcasting: 3
I0501 10:58:01.416638 6 log.go:172] (0xc000e66840) (0xc001830140) Stream removed, broadcasting: 5
May 1 10:58:01.416: INFO: Exec stderr: ""
May 1 10:58:01.416: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-q9876 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 10:58:01.416: INFO: >>> kubeConfig: /root/.kube/config
I0501 10:58:01.450849 6 log.go:172] (0xc0000eaf20) (0xc000d50500) Create stream
I0501 10:58:01.450880 6 log.go:172] (0xc0000eaf20) (0xc000d50500) Stream added, broadcasting: 1
I0501 10:58:01.452552 6 log.go:172] (0xc0000eaf20) Reply frame received for 1
I0501 10:58:01.452614 6 log.go:172] (0xc0000eaf20) (0xc000d22dc0) Create stream
I0501 10:58:01.452631 6 log.go:172] (0xc0000eaf20) (0xc000d22dc0) Stream added, broadcasting: 3
I0501 10:58:01.453594 6 log.go:172] (0xc0000eaf20) Reply frame received for 3
I0501 10:58:01.453627 6 log.go:172] (0xc0000eaf20) (0xc000d505a0) Create stream
I0501 10:58:01.453639 6 log.go:172] (0xc0000eaf20) (0xc000d505a0) Stream added, broadcasting: 5
I0501 10:58:01.454648 6 log.go:172] (0xc0000eaf20) Reply frame received for 5
I0501 10:58:01.511827 6 log.go:172] (0xc0000eaf20) Data frame received for 5
I0501 10:58:01.511863 6 log.go:172] (0xc000d505a0) (5) Data frame handling
I0501 10:58:01.511913 6 log.go:172] (0xc0000eaf20) Data frame received for 3
I0501 10:58:01.511949 6 log.go:172] (0xc000d22dc0) (3) Data frame handling
I0501 10:58:01.512004 6 log.go:172] (0xc000d22dc0) (3) Data frame sent
I0501 10:58:01.512033 6 log.go:172] (0xc0000eaf20) Data frame received for 3
I0501 10:58:01.512047 6 log.go:172] (0xc000d22dc0) (3) Data frame handling
I0501 10:58:01.513945 6 log.go:172] (0xc0000eaf20) Data frame received for 1
I0501 10:58:01.513980 6 log.go:172] (0xc000d50500) (1) Data frame handling
I0501 10:58:01.514021 6 log.go:172] (0xc000d50500) (1) Data frame sent
I0501 10:58:01.514054 6 log.go:172] (0xc0000eaf20) (0xc000d50500) Stream removed, broadcasting: 1
I0501 10:58:01.514115 6 log.go:172] (0xc0000eaf20) Go away received
I0501 10:58:01.514235 6 log.go:172] (0xc0000eaf20) (0xc000d50500) Stream removed, broadcasting: 1
I0501 10:58:01.514273 6 log.go:172] (0xc0000eaf20) (0xc000d22dc0) Stream removed, broadcasting: 3
I0501 10:58:01.514305 6 log.go:172] (0xc0000eaf20) (0xc000d505a0) Stream removed, broadcasting: 5
May 1 10:58:01.514: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 10:58:01.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-q9876" for this suite.
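Each `ExecWithOptions` call above opens three broadcast streams and the log then traces frame traffic per stream until all are removed. A minimal illustrative demultiplexer in the same spirit is sketched below; it is not the e2e framework's code, and the mapping of stream ids 1/3/5 to status/stdout/stderr is an assumption read off this log (command output arrives on 3, and "Exec stderr" is reported separately).

```python
# Illustrative sketch only: route per-stream data frames, as the
# "Data frame received for N" / "Data frame handling" entries above suggest.
STREAM_NAMES = {1: "status", 3: "stdout", 5: "stderr"}  # assumed mapping

class FrameDemux:
    """Collects data-frame payloads per stream id until streams are removed."""
    def __init__(self):
        self.buffers = {sid: b"" for sid in STREAM_NAMES}

    def on_data_frame(self, stream_id, payload):
        # One call per "Data frame received for <stream_id>" log entry.
        if stream_id not in self.buffers:
            raise ValueError(f"unknown stream {stream_id}")
        self.buffers[stream_id] += payload

    def result(self):
        # Exec result: stdout from stream 3, stderr from stream 5.
        return self.buffers[3].decode(), self.buffers[5].decode()

demux = FrameDemux()
demux.on_data_frame(3, b"127.0.0.1\tlocalhost\n")  # e.g. output of `cat /etc/hosts`
demux.on_data_frame(5, b"")                        # empty stderr, as logged above
stdout, stderr = demux.result()
```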
May 1 10:58:55.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 10:58:55.615: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-q9876, resource: bindings, ignored listing per whitelist
May 1 10:58:55.631: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-q9876 deletion completed in 54.111640318s
• [SLOW TEST:65.397 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 10:58:55.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
May 1 10:58:55.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-c28cj'
May 1 10:58:56.223: INFO: stderr: ""
May 1 10:58:56.223: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 10:58:56.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c28cj'
May 1 10:58:56.358: INFO: stderr: ""
May 1 10:58:56.358: INFO: stdout: "update-demo-nautilus-4kkfh update-demo-nautilus-xf7c2 "
May 1 10:58:56.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4kkfh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c28cj'
May 1 10:58:56.461: INFO: stderr: ""
May 1 10:58:56.461: INFO: stdout: ""
May 1 10:58:56.461: INFO: update-demo-nautilus-4kkfh is created but not running
May 1 10:59:01.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c28cj'
May 1 10:59:01.569: INFO: stderr: ""
May 1 10:59:01.569: INFO: stdout: "update-demo-nautilus-4kkfh update-demo-nautilus-xf7c2 "
May 1 10:59:01.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4kkfh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c28cj'
May 1 10:59:01.659: INFO: stderr: ""
May 1 10:59:01.659: INFO: stdout: "true"
May 1 10:59:01.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4kkfh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c28cj'
May 1 10:59:01.758: INFO: stderr: ""
May 1 10:59:01.758: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 10:59:01.758: INFO: validating pod update-demo-nautilus-4kkfh
May 1 10:59:01.763: INFO: got data: {
  "image": "nautilus.jpg"
}
May 1 10:59:01.763: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 10:59:01.763: INFO: update-demo-nautilus-4kkfh is verified up and running
May 1 10:59:01.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xf7c2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c28cj'
May 1 10:59:01.857: INFO: stderr: ""
May 1 10:59:01.857: INFO: stdout: "true"
May 1 10:59:01.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xf7c2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c28cj'
May 1 10:59:01.959: INFO: stderr: ""
May 1 10:59:01.959: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 10:59:01.959: INFO: validating pod update-demo-nautilus-xf7c2
May 1 10:59:01.963: INFO: got data: {
  "image": "nautilus.jpg"
}
May 1 10:59:01.963: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 10:59:01.963: INFO: update-demo-nautilus-xf7c2 is verified up and running
STEP: rolling-update to new replication controller
May 1 10:59:01.965: INFO: scanned /root for discovery docs:
May 1 10:59:01.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-c28cj'
May 1 10:59:24.563: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 1 10:59:24.563: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 10:59:24.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-c28cj'
May 1 10:59:24.657: INFO: stderr: ""
May 1 10:59:24.657: INFO: stdout: "update-demo-kitten-8hxd7 update-demo-kitten-x2rd7 "
May 1 10:59:24.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8hxd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c28cj'
May 1 10:59:24.747: INFO: stderr: ""
May 1 10:59:24.747: INFO: stdout: "true"
May 1 10:59:24.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8hxd7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c28cj'
May 1 10:59:24.842: INFO: stderr: ""
May 1 10:59:24.842: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 1 10:59:24.842: INFO: validating pod update-demo-kitten-8hxd7
May 1 10:59:24.852: INFO: got data: {
  "image": "kitten.jpg"
}
May 1 10:59:24.852: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 1 10:59:24.852: INFO: update-demo-kitten-8hxd7 is verified up and running
May 1 10:59:24.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x2rd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c28cj'
May 1 10:59:24.952: INFO: stderr: ""
May 1 10:59:24.952: INFO: stdout: "true"
May 1 10:59:24.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-x2rd7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-c28cj'
May 1 10:59:25.048: INFO: stderr: ""
May 1 10:59:25.048: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 1 10:59:25.048: INFO: validating pod update-demo-kitten-x2rd7
May 1 10:59:25.052: INFO: got data: {
  "image": "kitten.jpg"
}
May 1 10:59:25.052: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 1 10:59:25.052: INFO: update-demo-kitten-x2rd7 is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 10:59:25.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c28cj" for this suite.
May 1 10:59:47.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 10:59:47.112: INFO: namespace: e2e-tests-kubectl-c28cj, resource: bindings, ignored listing per whitelist
May 1 10:59:47.159: INFO: namespace e2e-tests-kubectl-c28cj deletion completed in 22.103873791s
• [SLOW TEST:51.528 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 10:59:47.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 10:59:51.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-gmkpm" for this suite.
May 1 11:00:33.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:00:33.409: INFO: namespace: e2e-tests-kubelet-test-gmkpm, resource: bindings, ignored listing per whitelist
May 1 11:00:33.419: INFO: namespace e2e-tests-kubelet-test-gmkpm deletion completed in 42.093144237s
• [SLOW TEST:46.259 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:00:33.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 1 11:00:40.623: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:00:41.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-gc27l" for this suite.
May 1 11:01:05.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:01:05.691: INFO: namespace: e2e-tests-replicaset-gc27l, resource: bindings, ignored listing per whitelist
May 1 11:01:05.732: INFO: namespace e2e-tests-replicaset-gc27l deletion completed in 24.08173103s
• [SLOW TEST:32.313 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:01:05.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-hkgv
STEP: Creating a pod to test atomic-volume-subpath
May 1 11:01:05.843: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hkgv" in namespace "e2e-tests-subpath-mcd9t" to be "success or failure"
May 1 11:01:05.858: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.789152ms
May 1 11:01:07.862: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019007187s
May 1 11:01:09.866: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022937768s
May 1 11:01:11.871: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02805694s
May 1 11:01:13.875: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Running", Reason="", readiness=false. Elapsed: 8.032012642s
May 1 11:01:15.880: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Running", Reason="", readiness=false. Elapsed: 10.036841685s
May 1 11:01:17.884: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Running", Reason="", readiness=false. Elapsed: 12.040840251s
May 1 11:01:19.888: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Running", Reason="", readiness=false. Elapsed: 14.045120889s
May 1 11:01:21.892: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Running", Reason="", readiness=false. Elapsed: 16.048600414s
May 1 11:01:23.896: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Running", Reason="", readiness=false. Elapsed: 18.05278685s
May 1 11:01:25.901: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Running", Reason="", readiness=false. Elapsed: 20.057320065s
May 1 11:01:27.905: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Running", Reason="", readiness=false. Elapsed: 22.06153881s
May 1 11:01:29.908: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Running", Reason="", readiness=false. Elapsed: 24.065067643s
May 1 11:01:31.912: INFO: Pod "pod-subpath-test-configmap-hkgv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.068793699s
STEP: Saw pod success
May 1 11:01:31.912: INFO: Pod "pod-subpath-test-configmap-hkgv" satisfied condition "success or failure"
May 1 11:01:31.915: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-hkgv container test-container-subpath-configmap-hkgv:
STEP: delete the pod
May 1 11:01:31.968: INFO: Waiting for pod pod-subpath-test-configmap-hkgv to disappear
May 1 11:01:32.034: INFO: Pod pod-subpath-test-configmap-hkgv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-hkgv
May 1 11:01:32.034: INFO: Deleting pod "pod-subpath-test-configmap-hkgv" in namespace "e2e-tests-subpath-mcd9t"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:01:32.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mcd9t" for this suite.
May 1 11:01:38.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:01:38.098: INFO: namespace: e2e-tests-subpath-mcd9t, resource: bindings, ignored listing per whitelist
May 1 11:01:38.140: INFO: namespace e2e-tests-subpath-mcd9t deletion completed in 6.099160262s
• [SLOW TEST:32.408 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:01:38.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
May 1 11:01:38.248: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:01:38.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lb8ks" for this suite.
May 1 11:01:44.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:01:44.416: INFO: namespace: e2e-tests-kubectl-lb8ks, resource: bindings, ignored listing per whitelist
May 1 11:01:44.433: INFO: namespace e2e-tests-kubectl-lb8ks deletion completed in 6.08204992s
• [SLOW TEST:6.293 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:01:44.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
May 1 11:01:44.599: INFO: Waiting up to 5m0s for pod "pod-2509946f-8b9b-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-824sr" to be "success or failure"
May 1 11:01:44.646: INFO: Pod "pod-2509946f-8b9b-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 47.073437ms
May 1 11:01:46.651: INFO: Pod "pod-2509946f-8b9b-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051617906s
May 1 11:01:48.655: INFO: Pod "pod-2509946f-8b9b-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055956901s
STEP: Saw pod success
May 1 11:01:48.655: INFO: Pod "pod-2509946f-8b9b-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:01:48.659: INFO: Trying to get logs from node hunter-worker2 pod pod-2509946f-8b9b-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 11:01:48.688: INFO: Waiting for pod pod-2509946f-8b9b-11ea-88a3-0242ac110017 to disappear
May 1 11:01:48.980: INFO: Pod pod-2509946f-8b9b-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:01:48.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-824sr" for this suite.
May 1 11:01:55.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:01:55.195: INFO: namespace: e2e-tests-emptydir-824sr, resource: bindings, ignored listing per whitelist
May 1 11:01:55.215: INFO: namespace e2e-tests-emptydir-824sr deletion completed in 6.229850967s
• [SLOW TEST:10.782 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:01:55.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 1 11:01:56.108: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:01:56.111: INFO: Number of nodes with available pods: 0
May 1 11:01:56.111: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:01:57.116: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:01:57.120: INFO: Number of nodes with available pods: 0
May 1 11:01:57.120: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:01:58.449: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:01:58.530: INFO: Number of nodes with available pods: 0
May 1 11:01:58.530: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:01:59.156: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:01:59.159: INFO: Number of nodes with available pods: 0
May 1 11:01:59.159: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:02:00.116: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:02:00.145: INFO: Number of nodes with available pods: 0
May 1 11:02:00.145: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:02:01.116: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:02:01.119: INFO: Number of nodes with available pods: 2
May 1 11:02:01.119: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 1 11:02:01.149: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:02:01.166: INFO: Number of nodes with available pods: 2
May 1 11:02:01.166: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-cmqrz, will wait for the garbage collector to delete the pods
May 1 11:02:02.262: INFO: Deleting DaemonSet.extensions daemon-set took: 6.909192ms
May 1 11:02:02.362: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.311665ms
May 1 11:02:11.783: INFO: Number of nodes with available pods: 0
May 1 11:02:11.783: INFO: Number of running nodes: 0, number of available pods: 0
May 1 11:02:11.789: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-cmqrz/daemonsets","resourceVersion":"8151588"},"items":null}
May 1 11:02:11.792: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-cmqrz/pods","resourceVersion":"8151588"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:02:11.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-cmqrz" for this suite.
May 1 11:02:17.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:02:17.876: INFO: namespace: e2e-tests-daemonsets-cmqrz, resource: bindings, ignored listing per whitelist
May 1 11:02:17.894: INFO: namespace e2e-tests-daemonsets-cmqrz deletion completed in 6.088712685s
• [SLOW TEST:22.679 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:02:17.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 1 11:02:18.744: INFO: Pod name wrapped-volume-race-395fd6a3-8b9b-11ea-88a3-0242ac110017: Found 0 pods out of 5
May 1 11:02:23.756: INFO: Pod name wrapped-volume-race-395fd6a3-8b9b-11ea-88a3-0242ac110017: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-395fd6a3-8b9b-11ea-88a3-0242ac110017 in namespace e2e-tests-emptydir-wrapper-tx5w4, will wait for the garbage collector to delete the pods
May 1 11:04:15.839: INFO: Deleting ReplicationController wrapped-volume-race-395fd6a3-8b9b-11ea-88a3-0242ac110017 took: 9.051471ms
May 1 11:04:16.339: INFO: Terminating ReplicationController wrapped-volume-race-395fd6a3-8b9b-11ea-88a3-0242ac110017 pods took: 500.284135ms
STEP: Creating RC which spawns configmap-volume pods
May 1 11:05:02.211: INFO: Pod name wrapped-volume-race-9acae39a-8b9b-11ea-88a3-0242ac110017: Found 0 pods out of 5
May 1 11:05:07.218: INFO: Pod name wrapped-volume-race-9acae39a-8b9b-11ea-88a3-0242ac110017: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9acae39a-8b9b-11ea-88a3-0242ac110017 in namespace e2e-tests-emptydir-wrapper-tx5w4, will wait for the garbage collector to delete the pods
May 1 11:07:23.304: INFO: Deleting ReplicationController wrapped-volume-race-9acae39a-8b9b-11ea-88a3-0242ac110017 took: 7.76784ms
May 1 11:07:23.405: INFO: Terminating ReplicationController wrapped-volume-race-9acae39a-8b9b-11ea-88a3-0242ac110017 pods took: 100.44003ms
STEP: Creating RC which spawns configmap-volume pods
May 1 11:08:01.747: INFO: Pod name wrapped-volume-race-05d267dc-8b9c-11ea-88a3-0242ac110017: Found 0 pods out of 5
May 1 11:08:06.755: INFO: Pod name wrapped-volume-race-05d267dc-8b9c-11ea-88a3-0242ac110017: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-05d267dc-8b9c-11ea-88a3-0242ac110017 in namespace e2e-tests-emptydir-wrapper-tx5w4, will wait for the garbage collector to delete the pods
May 1 11:10:42.898: INFO: Deleting ReplicationController wrapped-volume-race-05d267dc-8b9c-11ea-88a3-0242ac110017 took: 7.79574ms
May 1 11:10:42.998: INFO: Terminating ReplicationController wrapped-volume-race-05d267dc-8b9c-11ea-88a3-0242ac110017 pods took: 100.22439ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:11:22.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-tx5w4" for this suite.
May 1 11:11:30.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:11:30.287: INFO: namespace: e2e-tests-emptydir-wrapper-tx5w4, resource: bindings, ignored listing per whitelist
May 1 11:11:30.336: INFO: namespace e2e-tests-emptydir-wrapper-tx5w4 deletion completed in 8.0897059s
• [SLOW TEST:552.442 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:11:30.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
May 1 11:11:34.745: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:11:58.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-r5zct" for this suite.
May 1 11:12:04.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:12:04.886: INFO: namespace: e2e-tests-namespaces-r5zct, resource: bindings, ignored listing per whitelist
May 1 11:12:04.907: INFO: namespace e2e-tests-namespaces-r5zct deletion completed in 6.061267678s
STEP: Destroying namespace "e2e-tests-nsdeletetest-tcwvd" for this suite.
May 1 11:12:04.908: INFO: Namespace e2e-tests-nsdeletetest-tcwvd was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-rq25b" for this suite.
May 1 11:12:11.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:12:11.098: INFO: namespace: e2e-tests-nsdeletetest-rq25b, resource: bindings, ignored listing per whitelist
May 1 11:12:11.098: INFO: namespace e2e-tests-nsdeletetest-rq25b deletion completed in 6.189164474s
• [SLOW TEST:40.761 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:12:11.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-2jbkn
May 1 11:12:15.443: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-2jbkn
STEP: checking the pod's current state and verifying that restartCount is present
May 1 11:12:15.446: INFO: Initial restart count of pod liveness-exec is 0
May 1 11:13:07.649: INFO: Restart count of pod e2e-tests-container-probe-2jbkn/liveness-exec is now 1 (52.203060794s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:13:07.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-2jbkn" for this suite.
May 1 11:13:13.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:13:13.728: INFO: namespace: e2e-tests-container-probe-2jbkn, resource: bindings, ignored listing per whitelist
May 1 11:13:13.745: INFO: namespace e2e-tests-container-probe-2jbkn deletion completed in 6.077371995s
• [SLOW TEST:62.647 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:13:13.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-bfdd9a5e-8b9c-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume secrets
May 1 11:13:13.855: INFO: Waiting up to 5m0s for pod "pod-secrets-bfdf8324-8b9c-11ea-88a3-0242ac110017" in namespace "e2e-tests-secrets-hbb24" to be "success or failure"
May 1 11:13:13.869: INFO: Pod "pod-secrets-bfdf8324-8b9c-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 14.345428ms
May 1 11:13:15.872: INFO: Pod "pod-secrets-bfdf8324-8b9c-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017283357s
May 1 11:13:17.919: INFO: Pod "pod-secrets-bfdf8324-8b9c-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.064429159s
May 1 11:13:19.972: INFO: Pod "pod-secrets-bfdf8324-8b9c-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117393524s
STEP: Saw pod success
May 1 11:13:19.972: INFO: Pod "pod-secrets-bfdf8324-8b9c-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:13:19.975: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-bfdf8324-8b9c-11ea-88a3-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 1 11:13:20.048: INFO: Waiting for pod pod-secrets-bfdf8324-8b9c-11ea-88a3-0242ac110017 to disappear
May 1 11:13:20.134: INFO: Pod pod-secrets-bfdf8324-8b9c-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:13:20.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hbb24" for this suite.
May 1 11:13:26.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:13:26.195: INFO: namespace: e2e-tests-secrets-hbb24, resource: bindings, ignored listing per whitelist
May 1 11:13:26.228: INFO: namespace e2e-tests-secrets-hbb24 deletion completed in 6.090428306s
• [SLOW TEST:12.483 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:13:26.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 1 11:13:26.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c74d2ac1-8b9c-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-dc9lm" to be "success or failure"
May 1 11:13:26.336: INFO: Pod "downwardapi-volume-c74d2ac1-8b9c-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 19.032826ms
May 1 11:13:28.362: INFO: Pod "downwardapi-volume-c74d2ac1-8b9c-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044706031s
May 1 11:13:30.365: INFO: Pod "downwardapi-volume-c74d2ac1-8b9c-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048374905s
STEP: Saw pod success
May 1 11:13:30.365: INFO: Pod "downwardapi-volume-c74d2ac1-8b9c-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:13:30.368: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c74d2ac1-8b9c-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May 1 11:13:30.478: INFO: Waiting for pod downwardapi-volume-c74d2ac1-8b9c-11ea-88a3-0242ac110017 to disappear
May 1 11:13:30.625: INFO: Pod downwardapi-volume-c74d2ac1-8b9c-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:13:30.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dc9lm" for this suite.
May 1 11:13:36.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:13:36.948: INFO: namespace: e2e-tests-projected-dc9lm, resource: bindings, ignored listing per whitelist
May 1 11:13:36.972: INFO: namespace e2e-tests-projected-dc9lm deletion completed in 6.135139702s
• [SLOW TEST:10.743 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:13:36.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 1 11:13:37.104: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 1 11:13:37.111: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:37.114: INFO: Number of nodes with available pods: 0
May 1 11:13:37.114: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:13:38.119: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:38.123: INFO: Number of nodes with available pods: 0
May 1 11:13:38.123: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:13:39.189: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:39.192: INFO: Number of nodes with available pods: 0
May 1 11:13:39.192: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:13:40.118: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:40.120: INFO: Number of nodes with available pods: 0
May 1 11:13:40.121: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:13:41.117: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:41.120: INFO: Number of nodes with available pods: 1
May 1 11:13:41.120: INFO: Node hunter-worker2 is running more than one daemon pod
May 1 11:13:42.118: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:42.121: INFO: Number of nodes with available pods: 2
May 1 11:13:42.121: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 1 11:13:42.148: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:42.148: INFO: Wrong image for pod: daemon-set-8j79k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:42.154: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:43.158: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:43.158: INFO: Wrong image for pod: daemon-set-8j79k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:43.162: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:44.159: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:44.159: INFO: Wrong image for pod: daemon-set-8j79k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:44.163: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:45.159: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:45.159: INFO: Wrong image for pod: daemon-set-8j79k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:45.159: INFO: Pod daemon-set-8j79k is not available
May 1 11:13:45.163: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:46.157: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:46.157: INFO: Wrong image for pod: daemon-set-8j79k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:46.157: INFO: Pod daemon-set-8j79k is not available
May 1 11:13:46.160: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:47.159: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:47.159: INFO: Wrong image for pod: daemon-set-8j79k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:47.159: INFO: Pod daemon-set-8j79k is not available
May 1 11:13:47.163: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:48.158: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:48.158: INFO: Wrong image for pod: daemon-set-8j79k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:48.158: INFO: Pod daemon-set-8j79k is not available
May 1 11:13:48.161: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:49.178: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:49.178: INFO: Wrong image for pod: daemon-set-8j79k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:49.178: INFO: Pod daemon-set-8j79k is not available
May 1 11:13:49.181: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:50.158: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:50.159: INFO: Wrong image for pod: daemon-set-8j79k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:50.159: INFO: Pod daemon-set-8j79k is not available
May 1 11:13:50.163: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:51.159: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:51.159: INFO: Wrong image for pod: daemon-set-8j79k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:51.159: INFO: Pod daemon-set-8j79k is not available
May 1 11:13:51.163: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:52.159: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:52.159: INFO: Pod daemon-set-tzzwt is not available
May 1 11:13:52.163: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:53.315: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:53.315: INFO: Pod daemon-set-tzzwt is not available
May 1 11:13:53.319: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:54.158: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:54.158: INFO: Pod daemon-set-tzzwt is not available
May 1 11:13:54.161: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:55.158: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:55.158: INFO: Pod daemon-set-tzzwt is not available
May 1 11:13:55.162: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:56.171: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:56.175: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:57.158: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:57.162: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:58.158: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:58.158: INFO: Pod daemon-set-4mg9s is not available
May 1 11:13:58.161: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:13:59.158: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:13:59.158: INFO: Pod daemon-set-4mg9s is not available
May 1 11:13:59.163: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:14:00.159: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:14:00.159: INFO: Pod daemon-set-4mg9s is not available
May 1 11:14:00.164: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:14:01.158: INFO: Wrong image for pod: daemon-set-4mg9s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 1 11:14:01.158: INFO: Pod daemon-set-4mg9s is not available
May 1 11:14:01.162: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:14:02.159: INFO: Pod daemon-set-th5dw is not available
May 1 11:14:02.163: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
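The "Update daemon pods image" step earlier in this test amounts to changing the image in the DaemonSet's pod template. With `updateStrategy: RollingUpdate`, the controller deletes and replaces old pods one at a time, which is exactly the churn of "Wrong image ... / Pod ... is not available" messages logged above. A sketch of the template change being rolled out (field names follow the DaemonSet API; the container name is an illustrative assumption):

```yaml
# Fragment of the DaemonSet spec after the update step.
# Applying this change triggers the one-pod-at-a-time replacement
# seen in the log (old nginx pods deleted, redis pods created).
spec:
  template:
    spec:
      containers:
      - name: app                                      # illustrative name
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # target image per the log
```

The default RollingUpdate setting of `maxUnavailable: 1` explains why only one pod at a time shows up as "not available" during the rollout.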
May 1 11:14:02.167: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:14:02.170: INFO: Number of nodes with available pods: 1
May 1 11:14:02.170: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:14:03.175: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:14:03.179: INFO: Number of nodes with available pods: 1
May 1 11:14:03.179: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:14:04.176: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:14:04.179: INFO: Number of nodes with available pods: 1
May 1 11:14:04.179: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:14:05.195: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:14:05.199: INFO: Number of nodes with available pods: 2
May 1 11:14:05.199: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-ch8gf, will wait for the garbage collector to delete the pods
May 1 11:14:05.272: INFO: Deleting DaemonSet.extensions daemon-set took: 6.418688ms
May 1 11:14:05.372: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.236043ms
May 1 11:14:11.776: INFO: Number of nodes with available pods: 0
May 1 11:14:11.776: INFO: Number of running nodes: 0, number of available pods: 0
May 1 11:14:11.779: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-ch8gf/daemonsets","resourceVersion":"8153630"},"items":null}
May 1 11:14:11.782: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-ch8gf/pods","resourceVersion":"8153630"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:14:11.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-ch8gf" for this suite.
May 1 11:14:17.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:14:17.832: INFO: namespace: e2e-tests-daemonsets-ch8gf, resource: bindings, ignored listing per whitelist
May 1 11:14:17.901: INFO: namespace e2e-tests-daemonsets-ch8gf deletion completed in 6.106162028s
• [SLOW TEST:40.928 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:14:17.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 1 11:14:18.090: INFO: Waiting up to 5m0s for pod "pod-e6280343-8b9c-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-xmpjx" to be "success or failure"
May 1 11:14:18.094: INFO: Pod "pod-e6280343-8b9c-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270003ms
May 1 11:14:20.098: INFO: Pod "pod-e6280343-8b9c-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008389925s
May 1 11:14:22.102: INFO: Pod "pod-e6280343-8b9c-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011927732s
STEP: Saw pod success
May 1 11:14:22.102: INFO: Pod "pod-e6280343-8b9c-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:14:22.104: INFO: Trying to get logs from node hunter-worker2 pod pod-e6280343-8b9c-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 11:14:22.196: INFO: Waiting for pod pod-e6280343-8b9c-11ea-88a3-0242ac110017 to disappear
May 1 11:14:22.249: INFO: Pod pod-e6280343-8b9c-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:14:22.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xmpjx" for this suite.
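The "(non-root,0644,tmpfs)" pod above is roughly the following sketch. Only the tmpfs part (`emptyDir` with `medium: Memory`) and the container name `test-container` come from the log; the non-root user ID, the busybox stand-in image, and the file-mode check command are illustrative assumptions (the suite uses its own mounttest image).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644-tmpfs      # illustrative name
spec:
  securityContext:
    runAsUser: 1001                  # "non-root" part of the test name; value assumed
  restartPolicy: Never               # pod is expected to run to "Succeeded"
  containers:
  - name: test-container
    image: busybox                   # stand-in for the suite's test image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # "tmpfs": memory-backed emptyDir
```

The test then reads the container's logs (the "Trying to get logs" line above) to confirm the expected mode and ownership were observed inside the volume.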
May 1 11:14:28.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:14:28.328: INFO: namespace: e2e-tests-emptydir-xmpjx, resource: bindings, ignored listing per whitelist
May 1 11:14:28.408: INFO: namespace e2e-tests-emptydir-xmpjx deletion completed in 6.154694723s
• [SLOW TEST:10.507 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:14:28.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 1 11:14:28.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
May 1 11:14:28.612: INFO: stderr: ""
May 1 11:14:28.612: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:25:50Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
May 1 11:14:28.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mt4t6'
May 1 11:14:32.771: INFO: stderr: ""
May 1 11:14:32.771: INFO: stdout: "replicationcontroller/redis-master created\n"
May 1 11:14:32.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mt4t6'
May 1 11:14:33.136: INFO: stderr: ""
May 1 11:14:33.136: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
May 1 11:14:34.268: INFO: Selector matched 1 pods for map[app:redis]
May 1 11:14:34.268: INFO: Found 0 / 1
May 1 11:14:35.140: INFO: Selector matched 1 pods for map[app:redis]
May 1 11:14:35.140: INFO: Found 0 / 1
May 1 11:14:36.141: INFO: Selector matched 1 pods for map[app:redis]
May 1 11:14:36.141: INFO: Found 1 / 1
May 1 11:14:36.141: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 1 11:14:36.145: INFO: Selector matched 1 pods for map[app:redis]
May 1 11:14:36.145: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
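The two `create -f -` invocations above feed manifests to kubectl over stdin. Judging by the later `kubectl describe rc redis-master` output in this same test (name, labels, selector, replica count, container name, image, and port all appear there), the ReplicationController is equivalent to the following sketch; only the layout is assumed:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1                      # "Replicas: 1 current / 1 desired" in the describe output
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379      # "Port: 6379/TCP" in the describe output
```

The second manifest is a ClusterIP Service named redis-master selecting `app=redis,role=master`, as confirmed by the `kubectl describe service` output below.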
May 1 11:14:36.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-rfv94 --namespace=e2e-tests-kubectl-mt4t6'
May 1 11:14:36.265: INFO: stderr: ""
May 1 11:14:36.265: INFO: stdout: "Name: redis-master-rfv94\nNamespace: e2e-tests-kubectl-mt4t6\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Fri, 01 May 2020 11:14:32 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.55\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://5c1ec0c81db2246b35aeb771fce6d90df8d6a469e0173b218596272fa500abb7\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 01 May 2020 11:14:35 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-ssrjf (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-ssrjf:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-ssrjf\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-mt4t6/redis-master-rfv94 to hunter-worker\n Normal Pulled 2s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n"
May 1 11:14:36.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-mt4t6'
May 1 11:14:36.403: INFO: stderr: ""
May 1 11:14:36.403: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-mt4t6\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-rfv94\n"
May 1 11:14:36.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-mt4t6'
May 1 11:14:36.506: INFO: stderr: ""
May 1 11:14:36.506: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-mt4t6\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.110.104.212\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.55:6379\nSession Affinity: None\nEvents: \n"
May 1 11:14:36.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
May 1 11:14:36.624: INFO: stderr: ""
May 1 11:14:36.624: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 01 May 2020 11:14:32 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 01 May 2020 11:14:32 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 01 May 2020 11:14:32 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 01 May 2020 11:14:32 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 46d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 46d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
May 1 11:14:36.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-mt4t6'
May 1 11:14:36.729: INFO: stderr: ""
May 1 11:14:36.729: INFO: stdout: "Name: e2e-tests-kubectl-mt4t6\nLabels: e2e-framework=kubectl\n e2e-run=11cd58fa-8b99-11ea-88a3-0242ac110017\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:14:36.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mt4t6" for this suite.
May 1 11:15:00.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:15:00.772: INFO: namespace: e2e-tests-kubectl-mt4t6, resource: bindings, ignored listing per whitelist
May 1 11:15:00.870: INFO: namespace e2e-tests-kubectl-mt4t6 deletion completed in 24.13799222s
• [SLOW TEST:32.462 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:15:00.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:15:05.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-cs65g" for this suite.
May 1 11:15:11.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:15:11.557: INFO: namespace: e2e-tests-kubelet-test-cs65g, resource: bindings, ignored listing per whitelist
May 1 11:15:11.564: INFO: namespace e2e-tests-kubelet-test-cs65g deletion completed in 6.280172646s
• [SLOW TEST:10.694 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:15:11.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
May 1 11:15:11.661: INFO: Waiting up to 5m0s for pod "client-containers-06162e75-8b9d-11ea-88a3-0242ac110017" in namespace "e2e-tests-containers-c78j7" to be "success or failure"
May 1 11:15:11.665: INFO: Pod "client-containers-06162e75-8b9d-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169238ms
May 1 11:15:13.669: INFO: Pod "client-containers-06162e75-8b9d-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007653698s
May 1 11:15:15.673: INFO: Pod "client-containers-06162e75-8b9d-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012158897s
STEP: Saw pod success
May 1 11:15:15.673: INFO: Pod "client-containers-06162e75-8b9d-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:15:15.677: INFO: Trying to get logs from node hunter-worker pod client-containers-06162e75-8b9d-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 11:15:15.697: INFO: Waiting for pod client-containers-06162e75-8b9d-11ea-88a3-0242ac110017 to disappear
May 1 11:15:15.701: INFO: Pod client-containers-06162e75-8b9d-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:15:15.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-c78j7" for this suite.
May 1 11:15:21.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:15:21.734: INFO: namespace: e2e-tests-containers-c78j7, resource: bindings, ignored listing per whitelist
May 1 11:15:21.791: INFO: namespace e2e-tests-containers-c78j7 deletion completed in 6.08609449s
• [SLOW TEST:10.226 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:15:21.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 1 11:15:21.890: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May 1 11:15:21.944: INFO: Pod name sample-pod: Found 0 pods out of 1
May 1 11:15:26.948: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 1 11:15:26.948: INFO: Creating deployment "test-rolling-update-deployment"
May 1 11:15:26.952: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
May 1 11:15:27.004: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
May 1 11:15:29.011: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
May 1 11:15:29.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723928527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723928527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723928527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723928527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 11:15:31.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723928527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723928527, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723928527, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723928527, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 11:15:33.018: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
May 1 11:15:33.026: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-pb8hq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pb8hq/deployments/test-rolling-update-deployment,UID:0f355b07-8b9d-11ea-99e8-0242ac110002,ResourceVersion:8153969,Generation:1,CreationTimestamp:2020-05-01 11:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-01 11:15:27 +0000 UTC 2020-05-01 11:15:27 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-01 11:15:31 +0000 UTC 2020-05-01 11:15:27 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 1 11:15:33.030: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-pb8hq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pb8hq/replicasets/test-rolling-update-deployment-75db98fb4c,UID:0f3eca3a-8b9d-11ea-99e8-0242ac110002,ResourceVersion:8153956,Generation:1,CreationTimestamp:2020-05-01 11:15:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0f355b07-8b9d-11ea-99e8-0242ac110002 0xc000fa5137 0xc000fa5138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 1 11:15:33.030: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 1 11:15:33.030: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-pb8hq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pb8hq/replicasets/test-rolling-update-controller,UID:0c316906-8b9d-11ea-99e8-0242ac110002,ResourceVersion:8153968,Generation:2,CreationTimestamp:2020-05-01 11:15:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0f355b07-8b9d-11ea-99e8-0242ac110002 0xc000fa5077 0xc000fa5078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 11:15:33.034: INFO: Pod "test-rolling-update-deployment-75db98fb4c-b6mt7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-b6mt7,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-pb8hq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pb8hq/pods/test-rolling-update-deployment-75db98fb4c-b6mt7,UID:0f3fdda6-8b9d-11ea-99e8-0242ac110002,ResourceVersion:8153955,Generation:0,CreationTimestamp:2020-05-01 11:15:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 0f3eca3a-8b9d-11ea-99e8-0242ac110002 0xc000fa5eb7 0xc000fa5eb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zpbq4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zpbq4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-zpbq4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000fa5f40} {node.kubernetes.io/unreachable Exists NoExecute 0xc000fa5f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:15:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:15:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:15:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:15:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.57,StartTime:2020-05-01 11:15:27 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-01 11:15:30 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://9252e53d683552e8fbaaef38996d3f80be048f978e0d883c9ba661b0f0f82e1d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:15:33.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-pb8hq" for 
this suite. May 1 11:15:41.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:15:41.133: INFO: namespace: e2e-tests-deployment-pb8hq, resource: bindings, ignored listing per whitelist May 1 11:15:41.145: INFO: namespace e2e-tests-deployment-pb8hq deletion completed in 8.107976704s • [SLOW TEST:19.353 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:15:41.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 1 11:15:41.279: INFO: Waiting up to 5m0s for pod "pod-17bda6e9-8b9d-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-2xq5k" to be "success or failure" May 1 11:15:41.283: INFO: Pod "pod-17bda6e9-8b9d-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.495967ms May 1 11:15:43.287: INFO: Pod "pod-17bda6e9-8b9d-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007900976s May 1 11:15:45.290: INFO: Pod "pod-17bda6e9-8b9d-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011517339s STEP: Saw pod success May 1 11:15:45.290: INFO: Pod "pod-17bda6e9-8b9d-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:15:45.293: INFO: Trying to get logs from node hunter-worker2 pod pod-17bda6e9-8b9d-11ea-88a3-0242ac110017 container test-container: STEP: delete the pod May 1 11:15:45.307: INFO: Waiting for pod pod-17bda6e9-8b9d-11ea-88a3-0242ac110017 to disappear May 1 11:15:45.328: INFO: Pod pod-17bda6e9-8b9d-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:15:45.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2xq5k" for this suite. May 1 11:15:51.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:15:51.411: INFO: namespace: e2e-tests-emptydir-2xq5k, resource: bindings, ignored listing per whitelist May 1 11:15:51.427: INFO: namespace e2e-tests-emptydir-2xq5k deletion completed in 6.095394572s • [SLOW TEST:10.282 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:15:51.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 1 11:15:51.573: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lfm7p,SelfLink:/api/v1/namespaces/e2e-tests-watch-lfm7p/configmaps/e2e-watch-test-watch-closed,UID:1ddd0126-8b9d-11ea-99e8-0242ac110002,ResourceVersion:8154059,Generation:0,CreationTimestamp:2020-05-01 11:15:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 1 11:15:51.573: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lfm7p,SelfLink:/api/v1/namespaces/e2e-tests-watch-lfm7p/configmaps/e2e-watch-test-watch-closed,UID:1ddd0126-8b9d-11ea-99e8-0242ac110002,ResourceVersion:8154060,Generation:0,CreationTimestamp:2020-05-01 11:15:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 1 11:15:51.584: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lfm7p,SelfLink:/api/v1/namespaces/e2e-tests-watch-lfm7p/configmaps/e2e-watch-test-watch-closed,UID:1ddd0126-8b9d-11ea-99e8-0242ac110002,ResourceVersion:8154061,Generation:0,CreationTimestamp:2020-05-01 11:15:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 1 11:15:51.584: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-lfm7p,SelfLink:/api/v1/namespaces/e2e-tests-watch-lfm7p/configmaps/e2e-watch-test-watch-closed,UID:1ddd0126-8b9d-11ea-99e8-0242ac110002,ResourceVersion:8154062,Generation:0,CreationTimestamp:2020-05-01 11:15:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:15:51.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-lfm7p" for this suite. May 1 11:15:57.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:15:57.630: INFO: namespace: e2e-tests-watch-lfm7p, resource: bindings, ignored listing per whitelist May 1 11:15:57.674: INFO: namespace e2e-tests-watch-lfm7p deletion completed in 6.086760207s • [SLOW TEST:6.247 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:15:57.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: 
running the image docker.io/library/nginx:1.14-alpine May 1 11:15:58.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-4b6m2' May 1 11:15:58.128: INFO: stderr: "" May 1 11:15:58.128: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 1 11:16:03.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-4b6m2 -o json' May 1 11:16:03.282: INFO: stderr: "" May 1 11:16:03.282: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-01T11:15:58Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-4b6m2\",\n \"resourceVersion\": \"8154097\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-4b6m2/pods/e2e-test-nginx-pod\",\n \"uid\": \"21c8b6ee-8b9d-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-j2jsz\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n 
\"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-j2jsz\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-j2jsz\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T11:15:58Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T11:16:01Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T11:16:01Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T11:15:58Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://20cd028223e57aaf98fead6316a88c6dc2086db9f26bf920857a5c437a1906d2\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-01T11:16:00Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.58\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-01T11:15:58Z\"\n }\n}\n" STEP: replace the image in the pod May 1 11:16:03.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-4b6m2' May 1 11:16:03.589: INFO: stderr: "" May 1 11:16:03.589: INFO: stdout: "pod/e2e-test-nginx-pod 
replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 1 11:16:03.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-4b6m2' May 1 11:16:11.266: INFO: stderr: "" May 1 11:16:11.266: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:16:11.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4b6m2" for this suite. May 1 11:16:17.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:16:17.347: INFO: namespace: e2e-tests-kubectl-4b6m2, resource: bindings, ignored listing per whitelist May 1 11:16:17.384: INFO: namespace e2e-tests-kubectl-4b6m2 deletion completed in 6.114591191s • [SLOW TEST:19.709 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client May 1 11:16:17.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 11:16:37.528: INFO: Container started at 2020-05-01 11:16:20 +0000 UTC, pod became ready at 2020-05-01 11:16:36 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:16:37.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-hr2h6" for this suite. 
May 1 11:16:59.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:16:59.607: INFO: namespace: e2e-tests-container-probe-hr2h6, resource: bindings, ignored listing per whitelist
May 1 11:16:59.612: INFO: namespace e2e-tests-container-probe-hr2h6 deletion completed in 22.080195385s
• [SLOW TEST:42.228 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:16:59.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 1 11:16:59.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-lmxpk'
May 1 11:17:00.035: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 1 11:17:00.035: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
May 1 11:17:04.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lmxpk'
May 1 11:17:04.424: INFO: stderr: ""
May 1 11:17:04.424: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:17:04.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lmxpk" for this suite.
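Annotation (not part of the run output): the deprecation warning above is because `kubectl run --generator=deployment/v1beta1` was being used to create a Deployment. A sketch of roughly the manifest that command produced, written against `apps/v1` (the log shows the older `extensions` group on this v1.13 cluster); the name and image are taken from the log, the label key is an assumption.

```yaml
# Roughly what `kubectl run --generator=deployment/v1beta1` created;
# `kubectl create deployment` is the non-deprecated replacement.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment   # assumed label key
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```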
May 1 11:17:26.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:17:26.462: INFO: namespace: e2e-tests-kubectl-lmxpk, resource: bindings, ignored listing per whitelist
May 1 11:17:26.516: INFO: namespace e2e-tests-kubectl-lmxpk deletion completed in 22.08876668s
• [SLOW TEST:26.904 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:17:26.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
May 1 11:17:26.671: INFO: Waiting up to 5m0s for pod "pod-568ebc73-8b9d-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-bw9qz" to be "success or failure"
May 1 11:17:26.724: INFO: Pod "pod-568ebc73-8b9d-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 53.317469ms
May 1 11:17:28.742: INFO: Pod "pod-568ebc73-8b9d-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07156491s
May 1 11:17:30.788: INFO: Pod "pod-568ebc73-8b9d-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117782049s
May 1 11:17:32.793: INFO: Pod "pod-568ebc73-8b9d-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.122269268s
May 1 11:17:34.796: INFO: Pod "pod-568ebc73-8b9d-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125778082s
STEP: Saw pod success
May 1 11:17:34.796: INFO: Pod "pod-568ebc73-8b9d-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:17:34.799: INFO: Trying to get logs from node hunter-worker2 pod pod-568ebc73-8b9d-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 11:17:35.457: INFO: Waiting for pod pod-568ebc73-8b9d-11ea-88a3-0242ac110017 to disappear
May 1 11:17:35.754: INFO: Pod pod-568ebc73-8b9d-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:17:35.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bw9qz" for this suite.
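Annotation (not part of the run output): the (non-root,0666,default) case above runs a non-root pod that writes a file with mode 0666 into an emptyDir on the node's default medium and verifies the mode. A minimal sketch under stated assumptions; the suite actually uses its own mounttest image, and the names and image here are illustrative.

```yaml
# Illustrative pod: non-root user, emptyDir on the default (disk) medium.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-test     # hypothetical name
spec:
  securityContext:
    runAsUser: 1001            # the "non-root" part of the test name
  containers:
  - name: test-container       # container name matches the log
    image: busybox             # assumption; the suite uses a mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # "default" medium; the tmpfs variant sets medium: Memory
  restartPolicy: Never
```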
May 1 11:17:41.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:17:41.804: INFO: namespace: e2e-tests-emptydir-bw9qz, resource: bindings, ignored listing per whitelist
May 1 11:17:41.852: INFO: namespace e2e-tests-emptydir-bw9qz deletion completed in 6.09414799s
• [SLOW TEST:15.336 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:17:41.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 1 11:17:52.187: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 1 11:17:52.194: INFO: Pod pod-with-prestop-http-hook still exists
May 1 11:17:54.194: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 1 11:17:54.199: INFO: Pod pod-with-prestop-http-hook still exists
May 1 11:17:56.194: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 1 11:17:56.198: INFO: Pod pod-with-prestop-http-hook still exists
May 1 11:17:58.194: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 1 11:17:58.400: INFO: Pod pod-with-prestop-http-hook still exists
May 1 11:18:00.194: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 1 11:18:00.198: INFO: Pod pod-with-prestop-http-hook still exists
May 1 11:18:02.196: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 1 11:18:02.200: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:18:02.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-znl7w" for this suite.
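Annotation (not part of the run output): the test above deletes a pod carrying a preStop HTTP hook and then checks that a separate handler pod received the hook request. A minimal sketch of the hooked pod under stated assumptions; the pod name is from the log, while the image, port, path, and handler IP are illustrative.

```yaml
# Illustrative pod: preStop httpGet hook fired during pod deletion.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name taken from the log
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1      # assumption; any long-running image works
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # illustrative path on the handler pod
          port: 8080
          host: 10.244.0.5           # hypothetical IP of the hook-handler pod
```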
May 1 11:18:16.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:18:16.297: INFO: namespace: e2e-tests-container-lifecycle-hook-znl7w, resource: bindings, ignored listing per whitelist
May 1 11:18:16.330: INFO: namespace e2e-tests-container-lifecycle-hook-znl7w deletion completed in 14.121995737s
• [SLOW TEST:34.477 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:18:16.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-4cgmp
May 1 11:18:20.697: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-4cgmp
STEP: checking the pod's current state and verifying that restartCount is present
May 1 11:18:20.700: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:22:21.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4cgmp" for this suite.
May 1 11:22:27.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:22:27.914: INFO: namespace: e2e-tests-container-probe-4cgmp, resource: bindings, ignored listing per whitelist
May 1 11:22:27.950: INFO: namespace e2e-tests-container-probe-4cgmp deletion completed in 6.09528106s
• [SLOW TEST:251.620 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:22:27.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-db7sx
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 1 11:22:28.037: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 1 11:22:54.187: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.60:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-db7sx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 11:22:54.188: INFO: >>> kubeConfig: /root/.kube/config
I0501 11:22:54.229492 6 log.go:172] (0xc000ab0580) (0xc0019deb40) Create stream
I0501 11:22:54.229529 6 log.go:172] (0xc000ab0580) (0xc0019deb40) Stream added, broadcasting: 1
I0501 11:22:54.232103 6 log.go:172] (0xc000ab0580) Reply frame received for 1
I0501 11:22:54.232147 6 log.go:172] (0xc000ab0580) (0xc0022b0140) Create stream
I0501 11:22:54.232164 6 log.go:172] (0xc000ab0580) (0xc0022b0140) Stream added, broadcasting: 3
I0501 11:22:54.233397 6 log.go:172] (0xc000ab0580) Reply frame received for 3
I0501 11:22:54.233464 6 log.go:172] (0xc000ab0580) (0xc0021865a0) Create stream
I0501 11:22:54.233495 6 log.go:172] (0xc000ab0580) (0xc0021865a0) Stream added, broadcasting: 5
I0501 11:22:54.234934 6 log.go:172] (0xc000ab0580) Reply frame received for 5
I0501 11:22:54.338045 6 log.go:172] (0xc000ab0580) Data frame received for 5
I0501 11:22:54.338105 6 log.go:172] (0xc0021865a0) (5) Data frame handling
I0501 11:22:54.338160 6 log.go:172] (0xc000ab0580) Data frame received for 3
I0501 11:22:54.338189 6 log.go:172] (0xc0022b0140) (3) Data frame handling
I0501 11:22:54.338215 6 log.go:172] (0xc0022b0140) (3) Data frame sent
I0501 11:22:54.338244 6 log.go:172] (0xc000ab0580) Data frame received for 3
I0501 11:22:54.338302 6 log.go:172] (0xc0022b0140) (3) Data frame handling
I0501 11:22:54.339879 6 log.go:172] (0xc000ab0580) Data frame received for 1
I0501 11:22:54.339921 6 log.go:172] (0xc0019deb40) (1) Data frame handling
I0501 11:22:54.339967 6 log.go:172] (0xc0019deb40) (1) Data frame sent
I0501 11:22:54.340010 6 log.go:172] (0xc000ab0580) (0xc0019deb40) Stream removed, broadcasting: 1
I0501 11:22:54.340043 6 log.go:172] (0xc000ab0580) Go away received
I0501 11:22:54.340166 6 log.go:172] (0xc000ab0580) (0xc0019deb40) Stream removed, broadcasting: 1
I0501 11:22:54.340213 6 log.go:172] (0xc000ab0580) (0xc0022b0140) Stream removed, broadcasting: 3
I0501 11:22:54.340241 6 log.go:172] (0xc000ab0580) (0xc0021865a0) Stream removed, broadcasting: 5
May 1 11:22:54.340: INFO: Found all expected endpoints: [netserver-0]
May 1 11:22:54.343: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.94:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-db7sx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 11:22:54.343: INFO: >>> kubeConfig: /root/.kube/config
I0501 11:22:54.376470 6 log.go:172] (0xc000b7c2c0) (0xc001da8460) Create stream
I0501 11:22:54.376517 6 log.go:172] (0xc000b7c2c0) (0xc001da8460) Stream added, broadcasting: 1
I0501 11:22:54.378990 6 log.go:172] (0xc000b7c2c0) Reply frame received for 1
I0501 11:22:54.379029 6 log.go:172] (0xc000b7c2c0) (0xc001da8500) Create stream
I0501 11:22:54.379056 6 log.go:172] (0xc000b7c2c0) (0xc001da8500) Stream added, broadcasting: 3
I0501 11:22:54.379904 6 log.go:172] (0xc000b7c2c0) Reply frame received for 3
I0501 11:22:54.379929 6 log.go:172] (0xc000b7c2c0) (0xc0019dec80) Create stream
I0501 11:22:54.379938 6 log.go:172] (0xc000b7c2c0) (0xc0019dec80) Stream added, broadcasting: 5
I0501 11:22:54.380819 6 log.go:172] (0xc000b7c2c0) Reply frame received for 5
I0501 11:22:54.451684 6 log.go:172] (0xc000b7c2c0) Data frame received for 3
I0501 11:22:54.451730 6 log.go:172] (0xc001da8500) (3) Data frame handling
I0501 11:22:54.451753 6 log.go:172] (0xc000b7c2c0) Data frame received for 5
I0501 11:22:54.451779 6 log.go:172] (0xc0019dec80) (5) Data frame handling
I0501 11:22:54.451808 6 log.go:172] (0xc001da8500) (3) Data frame sent
I0501 11:22:54.451826 6 log.go:172] (0xc000b7c2c0) Data frame received for 3
I0501 11:22:54.451834 6 log.go:172] (0xc001da8500) (3) Data frame handling
I0501 11:22:54.453700 6 log.go:172] (0xc000b7c2c0) Data frame received for 1
I0501 11:22:54.453740 6 log.go:172] (0xc001da8460) (1) Data frame handling
I0501 11:22:54.453759 6 log.go:172] (0xc001da8460) (1) Data frame sent
I0501 11:22:54.453775 6 log.go:172] (0xc000b7c2c0) (0xc001da8460) Stream removed, broadcasting: 1
I0501 11:22:54.453805 6 log.go:172] (0xc000b7c2c0) Go away received
I0501 11:22:54.453910 6 log.go:172] (0xc000b7c2c0) (0xc001da8460) Stream removed, broadcasting: 1
I0501 11:22:54.453946 6 log.go:172] (0xc000b7c2c0) (0xc001da8500) Stream removed, broadcasting: 3
I0501 11:22:54.453963 6 log.go:172] (0xc000b7c2c0) (0xc0019dec80) Stream removed, broadcasting: 5
May 1 11:22:54.453: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:22:54.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-db7sx" for this suite.
May 1 11:23:16.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:23:16.582: INFO: namespace: e2e-tests-pod-network-test-db7sx, resource: bindings, ignored listing per whitelist
May 1 11:23:16.604: INFO: namespace e2e-tests-pod-network-test-db7sx deletion completed in 22.146145617s
• [SLOW TEST:48.654 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:23:16.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 1 11:23:16.724: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27365776-8b9e-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-lcmjq" to be "success or failure"
May 1 11:23:16.740: INFO: Pod "downwardapi-volume-27365776-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.746222ms
May 1 11:23:18.753: INFO: Pod "downwardapi-volume-27365776-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029678765s
May 1 11:23:20.758: INFO: Pod "downwardapi-volume-27365776-8b9e-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034307284s
STEP: Saw pod success
May 1 11:23:20.758: INFO: Pod "downwardapi-volume-27365776-8b9e-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:23:20.762: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-27365776-8b9e-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May 1 11:23:20.791: INFO: Waiting for pod downwardapi-volume-27365776-8b9e-11ea-88a3-0242ac110017 to disappear
May 1 11:23:20.808: INFO: Pod downwardapi-volume-27365776-8b9e-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:23:20.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lcmjq" for this suite.
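Annotation (not part of the run output): "should set mode on item file" verifies that a per-item `mode` on a downward API volume item is applied to the projected file. A minimal sketch under stated assumptions; `client-container` is the container name from the log, the image, command, paths, and mode value are illustrative.

```yaml
# Illustrative pod: downward API volume with a per-item file mode.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test     # hypothetical name
spec:
  containers:
  - name: client-container          # container name from the log
    image: busybox                  # assumption; the suite uses a mounttest image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                  # per-item mode this test verifies
  restartPolicy: Never
```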
May 1 11:23:26.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:23:26.839: INFO: namespace: e2e-tests-downward-api-lcmjq, resource: bindings, ignored listing per whitelist
May 1 11:23:26.900: INFO: namespace e2e-tests-downward-api-lcmjq deletion completed in 6.087574075s
• [SLOW TEST:10.295 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:23:26.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-2d5b40e7-8b9e-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume configMaps
May 1 11:23:27.037: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2d5be938-8b9e-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-6dqpf" to be "success or failure"
May 1 11:23:27.085: INFO: Pod "pod-projected-configmaps-2d5be938-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 47.875474ms
May 1 11:23:29.090: INFO: Pod "pod-projected-configmaps-2d5be938-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052464342s
May 1 11:23:31.095: INFO: Pod "pod-projected-configmaps-2d5be938-8b9e-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057097496s
STEP: Saw pod success
May 1 11:23:31.095: INFO: Pod "pod-projected-configmaps-2d5be938-8b9e-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:23:31.098: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-2d5be938-8b9e-11ea-88a3-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May 1 11:23:31.135: INFO: Waiting for pod pod-projected-configmaps-2d5be938-8b9e-11ea-88a3-0242ac110017 to disappear
May 1 11:23:31.149: INFO: Pod pod-projected-configmaps-2d5be938-8b9e-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:23:31.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6dqpf" for this suite.
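Annotation (not part of the run output): "consumable ... with mappings" means the configMap keys are remapped to custom file paths via `items` in a projected volume. A minimal sketch under stated assumptions; `projected-configmap-volume-test` is the container name from the log, and the configMap name is shortened (the run appends a UID suffix). Image, key, and paths are illustrative.

```yaml
# Illustrative pod: projected configMap volume with a key-to-path mapping.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps            # hypothetical name
spec:
  containers:
  - name: projected-configmap-volume-test   # container name from the log
    image: busybox                          # assumption; the suite uses a mounttest image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # run's name carries a UID suffix
          items:
          - key: data-2                     # illustrative key
            path: path/to/data-2            # the "mapping" under test
  restartPolicy: Never
```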
May 1 11:23:37.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:23:37.214: INFO: namespace: e2e-tests-projected-6dqpf, resource: bindings, ignored listing per whitelist
May 1 11:23:37.259: INFO: namespace e2e-tests-projected-6dqpf deletion completed in 6.106762527s
• [SLOW TEST:10.359 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:23:37.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 1 11:23:37.411: INFO: Waiting up to 5m0s for pod "pod-33886ef5-8b9e-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-6r5nq" to be "success or failure"
May 1 11:23:37.415: INFO: Pod "pod-33886ef5-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053145ms
May 1 11:23:39.419: INFO: Pod "pod-33886ef5-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008418968s
May 1 11:23:41.423: INFO: Pod "pod-33886ef5-8b9e-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01207334s
STEP: Saw pod success
May 1 11:23:41.423: INFO: Pod "pod-33886ef5-8b9e-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:23:41.426: INFO: Trying to get logs from node hunter-worker pod pod-33886ef5-8b9e-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 11:23:41.445: INFO: Waiting for pod pod-33886ef5-8b9e-11ea-88a3-0242ac110017 to disappear
May 1 11:23:41.496: INFO: Pod pod-33886ef5-8b9e-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:23:41.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6r5nq" for this suite.
May 1 11:23:47.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:23:47.573: INFO: namespace: e2e-tests-emptydir-6r5nq, resource: bindings, ignored listing per whitelist
May 1 11:23:47.617: INFO: namespace e2e-tests-emptydir-6r5nq deletion completed in 6.117091734s
• [SLOW TEST:10.358 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:23:47.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-wp7rg
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-wp7rg
STEP: Deleting pre-stop pod
May 1 11:24:00.840: INFO: Saw: {
  "Hostname": "server",
  "Sent": null,
  "Received": {
    "prestop": 1
  },
  "Errors": null,
  "Log": [
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
  ],
  "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:24:00.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-wp7rg" for this suite.
May 1 11:24:38.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:24:38.985: INFO: namespace: e2e-tests-prestop-wp7rg, resource: bindings, ignored listing per whitelist
May 1 11:24:38.990: INFO: namespace e2e-tests-prestop-wp7rg deletion completed in 38.127592123s
• [SLOW TEST:51.373 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:24:38.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
May 1 11:24:39.629: INFO: created pod pod-service-account-defaultsa
May 1 11:24:39.629: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 1 11:24:39.636: INFO: created pod pod-service-account-mountsa
May 1 11:24:39.636: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 1 11:24:39.695: INFO: created pod pod-service-account-nomountsa
May 1 11:24:39.695: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 1 11:24:39.702: INFO: created pod pod-service-account-defaultsa-mountspec
May 1 11:24:39.702: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 1 11:24:39.734: INFO: created pod pod-service-account-mountsa-mountspec
May 1 11:24:39.734: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 1 11:24:39.743: INFO: created pod pod-service-account-nomountsa-mountspec
May 1 11:24:39.743: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 1 11:24:39.781: INFO: created pod pod-service-account-defaultsa-nomountspec
May 1 11:24:39.781: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 1 11:24:39.871: INFO: created pod pod-service-account-mountsa-nomountspec
May 1 11:24:39.871: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 1 11:24:39.882: INFO: created pod pod-service-account-nomountsa-nomountspec
May 1 11:24:39.882: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:24:39.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-gjv8p" for this suite.
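Annotation (not part of the run output): the automount-opt-out matrix above combines ServiceAccount-level and pod-level settings; the pod-level `automountServiceAccountToken` field wins when set. A minimal sketch of the opt-out case under stated assumptions; the pod name matches one from the log, the service account name, image, and command are illustrative.

```yaml
# Illustrative pod: opting out of the service account token automount.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa   # name taken from the log
spec:
  serviceAccountName: mount-test        # hypothetical service account
  automountServiceAccountToken: false   # pod-level opt-out; overrides the SA setting
  containers:
  - name: token-test
    image: busybox                      # assumption
    # With automount disabled, the token directory should be absent.
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || echo no-token"]
```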
May 1 11:25:18.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:25:18.059: INFO: namespace: e2e-tests-svcaccounts-gjv8p, resource: bindings, ignored listing per whitelist
May 1 11:25:18.080: INFO: namespace e2e-tests-svcaccounts-gjv8p deletion completed in 38.111456775s
• [SLOW TEST:39.089 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:25:18.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
May 1 11:25:18.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
May 1 11:25:18.611: INFO: stderr: ""
May 1 11:25:18.611: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:25:18.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kcgwt" for this suite.
May 1 11:25:24.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:25:24.922: INFO: namespace: e2e-tests-kubectl-kcgwt, resource: bindings, ignored listing per whitelist
May 1 11:25:25.016: INFO: namespace e2e-tests-kubectl-kcgwt deletion completed in 6.400488311s
• [SLOW TEST:6.936 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:25:25.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-73c593f2-8b9e-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume secrets
May 1 11:25:25.204: INFO: Waiting up to 5m0s for pod "pod-secrets-73c61f9d-8b9e-11ea-88a3-0242ac110017" in namespace "e2e-tests-secrets-wc6qt" to be "success or failure"
May 1 11:25:25.250: INFO: Pod "pod-secrets-73c61f9d-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 46.222379ms
May 1 11:25:27.254: INFO: Pod "pod-secrets-73c61f9d-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05053779s
May 1 11:25:29.258: INFO: Pod "pod-secrets-73c61f9d-8b9e-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053893008s
STEP: Saw pod success
May 1 11:25:29.258: INFO: Pod "pod-secrets-73c61f9d-8b9e-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:25:29.260: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-73c61f9d-8b9e-11ea-88a3-0242ac110017 container secret-volume-test: 
STEP: delete the pod
May 1 11:25:29.307: INFO: Waiting for pod pod-secrets-73c61f9d-8b9e-11ea-88a3-0242ac110017 to disappear
May 1 11:25:29.315: INFO: Pod pod-secrets-73c61f9d-8b9e-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:25:29.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wc6qt" for this suite.
May 1 11:25:35.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:25:35.360: INFO: namespace: e2e-tests-secrets-wc6qt, resource: bindings, ignored listing per whitelist
May 1 11:25:35.407: INFO: namespace e2e-tests-secrets-wc6qt deletion completed in 6.088281999s
• [SLOW TEST:10.390 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:25:35.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 1 11:25:35.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-p2mrh'
May 1 11:25:43.851: INFO: stderr: ""
May 1 11:25:43.851: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
May 1 11:25:43.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-p2mrh'
May 1 11:25:48.980: INFO: stderr: ""
May 1 11:25:48.980: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:25:48.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p2mrh" for this suite.
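The `kubectl run ... --restart=Never --generator=run-pod/v1` invocation in the Kubectl run pod test creates a bare Pod rather than a Deployment. A rough YAML equivalent of what that command submits — the name and image are taken from the log; the assumption that the container is named after the pod follows the `run-pod/v1` generator's convention:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
spec:
  restartPolicy: Never           # the --restart=Never flag maps to this field
  containers:
  - name: e2e-test-nginx-pod     # run-pod/v1 names the container after the pod
    image: docker.io/library/nginx:1.14-alpine
```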
May 1 11:25:55.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:25:55.129: INFO: namespace: e2e-tests-kubectl-p2mrh, resource: bindings, ignored listing per whitelist
May 1 11:25:55.220: INFO: namespace e2e-tests-kubectl-p2mrh deletion completed in 6.176992294s
• [SLOW TEST:19.813 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:25:55.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:25:55.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nq489" for this suite.
May 1 11:26:17.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:26:17.432: INFO: namespace: e2e-tests-pods-nq489, resource: bindings, ignored listing per whitelist
May 1 11:26:17.464: INFO: namespace e2e-tests-pods-nq489 deletion completed in 22.093019608s
• [SLOW TEST:22.243 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:26:17.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-72qs
STEP: Creating a pod to test atomic-volume-subpath
May 1 11:26:17.598: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-72qs" in namespace "e2e-tests-subpath-w9s5c" to be "success or failure"
May 1 11:26:17.602: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Pending", Reason="", readiness=false. Elapsed: 3.801391ms
May 1 11:26:19.607: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008840643s
May 1 11:26:21.611: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013186234s
May 1 11:26:23.756: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158047797s
May 1 11:26:25.761: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Running", Reason="", readiness=false. Elapsed: 8.163263305s
May 1 11:26:27.766: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Running", Reason="", readiness=false. Elapsed: 10.167655812s
May 1 11:26:29.770: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Running", Reason="", readiness=false. Elapsed: 12.171789246s
May 1 11:26:31.775: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Running", Reason="", readiness=false. Elapsed: 14.176694133s
May 1 11:26:33.786: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Running", Reason="", readiness=false. Elapsed: 16.187488579s
May 1 11:26:35.790: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Running", Reason="", readiness=false. Elapsed: 18.192325162s
May 1 11:26:37.794: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Running", Reason="", readiness=false. Elapsed: 20.196066243s
May 1 11:26:39.799: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Running", Reason="", readiness=false. Elapsed: 22.200500089s
May 1 11:26:41.803: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Running", Reason="", readiness=false. Elapsed: 24.204501189s
May 1 11:26:43.999: INFO: Pod "pod-subpath-test-projected-72qs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.400656765s
STEP: Saw pod success
May 1 11:26:43.999: INFO: Pod "pod-subpath-test-projected-72qs" satisfied condition "success or failure"
May 1 11:26:44.001: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-72qs container test-container-subpath-projected-72qs: 
STEP: delete the pod
May 1 11:26:44.057: INFO: Waiting for pod pod-subpath-test-projected-72qs to disappear
May 1 11:26:44.064: INFO: Pod pod-subpath-test-projected-72qs no longer exists
STEP: Deleting pod pod-subpath-test-projected-72qs
May 1 11:26:44.064: INFO: Deleting pod "pod-subpath-test-projected-72qs" in namespace "e2e-tests-subpath-w9s5c"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:26:44.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-w9s5c" for this suite.
May 1 11:26:50.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:26:50.135: INFO: namespace: e2e-tests-subpath-w9s5c, resource: bindings, ignored listing per whitelist
May 1 11:26:50.167: INFO: namespace e2e-tests-subpath-w9s5c deletion completed in 6.098966269s
• [SLOW TEST:32.702 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:26:50.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
May 1 11:26:50.344: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-mh676" to be "success or failure"
May 1 11:26:50.352: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.417179ms
May 1 11:26:52.356: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012217651s
May 1 11:26:54.360: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.016244149s
May 1 11:26:56.365: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021273429s
STEP: Saw pod success
May 1 11:26:56.365: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
May 1 11:26:56.368: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
May 1 11:26:56.424: INFO: Waiting for pod pod-host-path-test to disappear
May 1 11:26:56.430: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:26:56.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-mh676" for this suite.
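The HostPath test above mounts a host directory into a pod named pod-host-path-test and inspects the mode the container sees. A minimal sketch of that kind of pod — the host path, image, and command are illustrative assumptions, not the exact spec the test builds:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox                      # illustrative image
    command: ["sh", "-c", "ls -ld /test-volume"]  # print the volume's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/test-volume            # illustrative host path
      type: DirectoryOrCreate
```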
May 1 11:27:02.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:27:02.514: INFO: namespace: e2e-tests-hostpath-mh676, resource: bindings, ignored listing per whitelist
May 1 11:27:02.538: INFO: namespace e2e-tests-hostpath-mh676 deletion completed in 6.104755337s
• [SLOW TEST:12.370 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:27:02.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-ade6d017-8b9e-11ea-88a3-0242ac110017
STEP: Creating configMap with name cm-test-opt-upd-ade6d08b-8b9e-11ea-88a3-0242ac110017
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ade6d017-8b9e-11ea-88a3-0242ac110017
STEP: Updating configmap cm-test-opt-upd-ade6d08b-8b9e-11ea-88a3-0242ac110017
STEP: Creating configMap with name cm-test-opt-create-ade6d0b9-8b9e-11ea-88a3-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:27:13.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dbjv5" for this suite.
May 1 11:27:37.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:27:37.672: INFO: namespace: e2e-tests-projected-dbjv5, resource: bindings, ignored listing per whitelist
May 1 11:27:37.704: INFO: namespace e2e-tests-projected-dbjv5 deletion completed in 24.272785162s
• [SLOW TEST:35.166 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:27:37.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-7bmsf/configmap-test-c2d1223f-8b9e-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume configMaps
May 1 11:27:37.816: INFO: Waiting up to 5m0s for pod "pod-configmaps-c2d43f54-8b9e-11ea-88a3-0242ac110017" in namespace "e2e-tests-configmap-7bmsf" to be "success or failure"
May 1 11:27:37.853: INFO: Pod "pod-configmaps-c2d43f54-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 36.149297ms
May 1 11:27:39.856: INFO: Pod "pod-configmaps-c2d43f54-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039516847s
May 1 11:27:41.859: INFO: Pod "pod-configmaps-c2d43f54-8b9e-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042890705s
STEP: Saw pod success
May 1 11:27:41.859: INFO: Pod "pod-configmaps-c2d43f54-8b9e-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:27:41.862: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-c2d43f54-8b9e-11ea-88a3-0242ac110017 container env-test: 
STEP: delete the pod
May 1 11:27:41.896: INFO: Waiting for pod pod-configmaps-c2d43f54-8b9e-11ea-88a3-0242ac110017 to disappear
May 1 11:27:41.910: INFO: Pod pod-configmaps-c2d43f54-8b9e-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:27:41.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7bmsf" for this suite.
May 1 11:27:47.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:27:47.953: INFO: namespace: e2e-tests-configmap-7bmsf, resource: bindings, ignored listing per whitelist
May 1 11:27:48.001: INFO: namespace e2e-tests-configmap-7bmsf deletion completed in 6.087775928s
• [SLOW TEST:10.297 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:27:48.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-v4b94
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v4b94 to expose endpoints map[]
May 1 11:27:48.156: INFO: Get endpoints failed (16.663717ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 1 11:27:49.160: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v4b94 exposes endpoints map[] (1.020085973s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-v4b94
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v4b94 to expose endpoints map[pod1:[80]]
May 1 11:27:53.248: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v4b94 exposes endpoints map[pod1:[80]] (4.081503984s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-v4b94
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v4b94 to expose endpoints map[pod1:[80] pod2:[80]]
May 1 11:27:56.337: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v4b94 exposes endpoints map[pod1:[80] pod2:[80]] (3.084457742s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-v4b94
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v4b94 to expose endpoints map[pod2:[80]]
May 1 11:27:57.436: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v4b94 exposes endpoints map[pod2:[80]] (1.094219342s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-v4b94
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-v4b94 to expose endpoints map[]
May 1 11:27:58.453: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-v4b94 exposes endpoints map[] (1.013715495s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:27:58.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-v4b94" for this suite.
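The Services test above watches the endpoints of endpoint-test2 grow and shrink as pods matching its selector are created and deleted. A minimal sketch of the service plus one backing pod — the selector labels and image are illustrative assumptions; only the service name, pod name, and port 80 come from the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-test2          # illustrative selector label
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: endpoint-test2          # matching this label adds the pod to the endpoints
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1  # illustrative image
    ports:
    - containerPort: 80
```

Deleting pod1 removes its IP from the service's Endpoints object, which is exactly the transition the log validates (map[pod1:[80] pod2:[80]] → map[pod2:[80]]).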
May 1 11:28:04.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:28:04.666: INFO: namespace: e2e-tests-services-v4b94, resource: bindings, ignored listing per whitelist
May 1 11:28:04.668: INFO: namespace e2e-tests-services-v4b94 deletion completed in 6.085562919s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:16.667 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:28:04.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d2e89888-8b9e-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume configMaps
May 1 11:28:04.787: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2e911ad-8b9e-11ea-88a3-0242ac110017" in namespace "e2e-tests-configmap-5bhhk" to be "success or failure"
May 1 11:28:04.801: INFO: Pod "pod-configmaps-d2e911ad-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 14.577055ms
May 1 11:28:06.806: INFO: Pod "pod-configmaps-d2e911ad-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019083441s
May 1 11:28:08.810: INFO: Pod "pod-configmaps-d2e911ad-8b9e-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022987327s
STEP: Saw pod success
May 1 11:28:08.810: INFO: Pod "pod-configmaps-d2e911ad-8b9e-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:28:08.813: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-d2e911ad-8b9e-11ea-88a3-0242ac110017 container configmap-volume-test: 
STEP: delete the pod
May 1 11:28:08.841: INFO: Waiting for pod pod-configmaps-d2e911ad-8b9e-11ea-88a3-0242ac110017 to disappear
May 1 11:28:08.851: INFO: Pod pod-configmaps-d2e911ad-8b9e-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:28:08.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5bhhk" for this suite.
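In contrast to the environment-variable test earlier, the [sig-storage] ConfigMap test consumes the ConfigMap as a volume (its container is named configmap-volume-test). A minimal sketch of that pattern — the key, mount path, image, and command are illustrative assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-volume
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                                    # illustrative image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]  # each key appears as a file
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```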
May 1 11:28:14.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:28:14.914: INFO: namespace: e2e-tests-configmap-5bhhk, resource: bindings, ignored listing per whitelist
May 1 11:28:14.959: INFO: namespace e2e-tests-configmap-5bhhk deletion completed in 6.105492059s
• [SLOW TEST:10.291 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:28:14.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
May 1 11:28:15.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ltq2t'
May 1 11:28:15.362: INFO: stderr: ""
May 1 11:28:15.362: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 11:28:15.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ltq2t'
May 1 11:28:15.487: INFO: stderr: ""
May 1 11:28:15.487: INFO: stdout: "update-demo-nautilus-2c8l7 update-demo-nautilus-hxztd "
May 1 11:28:15.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2c8l7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ltq2t'
May 1 11:28:15.594: INFO: stderr: ""
May 1 11:28:15.594: INFO: stdout: ""
May 1 11:28:15.594: INFO: update-demo-nautilus-2c8l7 is created but not running
May 1 11:28:20.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ltq2t'
May 1 11:28:20.709: INFO: stderr: ""
May 1 11:28:20.709: INFO: stdout: "update-demo-nautilus-2c8l7 update-demo-nautilus-hxztd "
May 1 11:28:20.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2c8l7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ltq2t'
May 1 11:28:20.812: INFO: stderr: ""
May 1 11:28:20.812: INFO: stdout: "true"
May 1 11:28:20.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2c8l7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ltq2t'
May 1 11:28:20.923: INFO: stderr: ""
May 1 11:28:20.923: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 11:28:20.923: INFO: validating pod update-demo-nautilus-2c8l7
May 1 11:28:20.927: INFO: got data: { "image": "nautilus.jpg" }
May 1 11:28:20.927: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 11:28:20.927: INFO: update-demo-nautilus-2c8l7 is verified up and running
May 1 11:28:20.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hxztd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ltq2t'
May 1 11:28:21.044: INFO: stderr: ""
May 1 11:28:21.044: INFO: stdout: "true"
May 1 11:28:21.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hxztd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ltq2t'
May 1 11:28:21.143: INFO: stderr: ""
May 1 11:28:21.143: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 11:28:21.143: INFO: validating pod update-demo-nautilus-hxztd
May 1 11:28:21.148: INFO: got data: { "image": "nautilus.jpg" }
May 1 11:28:21.148: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 11:28:21.148: INFO: update-demo-nautilus-hxztd is verified up and running
STEP: using delete to clean up resources
May 1 11:28:21.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ltq2t'
May 1 11:28:21.252: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 11:28:21.252: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 1 11:28:21.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-ltq2t'
May 1 11:28:21.368: INFO: stderr: "No resources found.\n"
May 1 11:28:21.369: INFO: stdout: ""
May 1 11:28:21.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-ltq2t -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 1 11:28:21.483: INFO: stderr: ""
May 1 11:28:21.483: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:28:21.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ltq2t" for this suite.
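[Editor's note] The repeated go-template polls above all evaluate one predicate over the pod's `status.containerStatuses`: the container named `update-demo` must report a `running` state. A minimal Python sketch of that same check (illustrative only; this is not the e2e framework's code, and the sample pod dicts are made up):

```python
def container_running(pod: dict, container_name: str) -> bool:
    """Mirror the kubectl go-template check above: true only when the
    named container reports a 'running' entry in status.containerStatuses."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False

# A freshly created pod has no containerStatuses yet, which is why the
# first poll prints "created but not running" (template emits "").
pending = {"status": {}}
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {}}}]}}
print(container_running(pending, "update-demo"))  # False
print(container_running(running, "update-demo"))  # True
```

The empty stdout on the first poll versus "true" on the second corresponds exactly to these two cases.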
May 1 11:28:43.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:28:43.716: INFO: namespace: e2e-tests-kubectl-ltq2t, resource: bindings, ignored listing per whitelist
May 1 11:28:43.747: INFO: namespace e2e-tests-kubectl-ltq2t deletion completed in 22.261007834s
• [SLOW TEST:28.787 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:28:43.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 1 11:28:43.879: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ea317944-8b9e-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-bdfcc" to be "success or failure"
May 1 11:28:43.888: INFO: Pod "downwardapi-volume-ea317944-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.61777ms
May 1 11:28:45.892: INFO: Pod "downwardapi-volume-ea317944-8b9e-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01248908s
May 1 11:28:47.896: INFO: Pod "downwardapi-volume-ea317944-8b9e-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016260396s
STEP: Saw pod success
May 1 11:28:47.896: INFO: Pod "downwardapi-volume-ea317944-8b9e-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:28:47.898: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ea317944-8b9e-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May 1 11:28:47.969: INFO: Waiting for pod downwardapi-volume-ea317944-8b9e-11ea-88a3-0242ac110017 to disappear
May 1 11:28:47.978: INFO: Pod downwardapi-volume-ea317944-8b9e-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:28:47.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bdfcc" for this suite.
May 1 11:28:53.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:28:54.039: INFO: namespace: e2e-tests-downward-api-bdfcc, resource: bindings, ignored listing per whitelist
May 1 11:28:54.097: INFO: namespace e2e-tests-downward-api-bdfcc deletion completed in 6.115198448s
• [SLOW TEST:10.350 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:28:54.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 1 11:28:54.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-bcdh8'
May 1 11:28:54.281: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 1 11:28:54.281: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
May 1 11:28:54.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-bcdh8'
May 1 11:28:54.418: INFO: stderr: ""
May 1 11:28:54.418: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:28:54.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bcdh8" for this suite.
May 1 11:29:00.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:29:00.545: INFO: namespace: e2e-tests-kubectl-bcdh8, resource: bindings, ignored listing per whitelist
May 1 11:29:00.547: INFO: namespace e2e-tests-kubectl-bcdh8 deletion completed in 6.111936525s
• [SLOW TEST:6.450 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:29:00.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 1 11:29:05.244: INFO: Successfully updated pod "annotationupdatef4365a27-8b9e-11ea-88a3-0242ac110017"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:29:09.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cqsmh" for this suite.
May 1 11:29:33.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:29:33.428: INFO: namespace: e2e-tests-downward-api-cqsmh, resource: bindings, ignored listing per whitelist
May 1 11:29:33.433: INFO: namespace e2e-tests-downward-api-cqsmh deletion completed in 24.147158318s
• [SLOW TEST:32.886 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:29:33.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 1 11:29:33.566: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 1 11:29:33.575: INFO: Waiting for terminating namespaces to be deleted...
May 1 11:29:33.577: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 1 11:29:33.582: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 1 11:29:33.582: INFO: Container kube-proxy ready: true, restart count 0
May 1 11:29:33.582: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 1 11:29:33.582: INFO: Container kindnet-cni ready: true, restart count 0
May 1 11:29:33.582: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 1 11:29:33.582: INFO: Container coredns ready: true, restart count 0
May 1 11:29:33.582: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 1 11:29:33.586: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 1 11:29:33.586: INFO: Container kindnet-cni ready: true, restart count 0
May 1 11:29:33.586: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 1 11:29:33.586: INFO: Container coredns ready: true, restart count 0
May 1 11:29:33.586: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 1 11:29:33.586: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-0a43fc59-8b9f-11ea-88a3-0242ac110017 42
STEP: Trying to relaunch the pod, now with labels.
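[Editor's note] The steps above apply a random label to a node and then relaunch the pod with a matching `spec.nodeSelector`. The predicate being validated is simple exact-match label selection; a minimal Python sketch (illustrative only, not the scheduler's code; the label values come from the log):

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """A pod's spec.nodeSelector matches a node only if every key/value
    pair appears verbatim in the node's labels (exact-match semantics)."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# The randomly applied e2e label from the log, with value "42".
labels = {
    "kubernetes.io/hostname": "hunter-worker",
    "kubernetes.io/e2e-0a43fc59-8b9f-11ea-88a3-0242ac110017": "42",
}
selector = {"kubernetes.io/e2e-0a43fc59-8b9f-11ea-88a3-0242ac110017": "42"}
print(node_selector_matches(labels, selector))  # True
print(node_selector_matches({"kubernetes.io/hostname": "hunter-worker2"}, selector))  # False
```

This is why the relaunched pod can only land on hunter-worker: no other node carries the random label.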
STEP: removing the label kubernetes.io/e2e-0a43fc59-8b9f-11ea-88a3-0242ac110017 off the node hunter-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0a43fc59-8b9f-11ea-88a3-0242ac110017
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:29:41.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-cdbqg" for this suite.
May 1 11:29:51.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:29:51.802: INFO: namespace: e2e-tests-sched-pred-cdbqg, resource: bindings, ignored listing per whitelist
May 1 11:29:51.814: INFO: namespace e2e-tests-sched-pred-cdbqg deletion completed in 10.088425417s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:18.380 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:29:51.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 1 11:29:51.948: INFO: Waiting up to 5m0s for pod "pod-12c8e4d8-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-2hlj4" to be "success or failure"
May 1 11:29:51.955: INFO: Pod "pod-12c8e4d8-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535235ms
May 1 11:29:53.959: INFO: Pod "pod-12c8e4d8-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010424383s
May 1 11:29:55.963: INFO: Pod "pod-12c8e4d8-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014472258s
STEP: Saw pod success
May 1 11:29:55.963: INFO: Pod "pod-12c8e4d8-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:29:55.966: INFO: Trying to get logs from node hunter-worker2 pod pod-12c8e4d8-8b9f-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 11:29:56.008: INFO: Waiting for pod pod-12c8e4d8-8b9f-11ea-88a3-0242ac110017 to disappear
May 1 11:29:56.021: INFO: Pod pod-12c8e4d8-8b9f-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:29:56.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2hlj4" for this suite.
May 1 11:30:02.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:30:02.058: INFO: namespace: e2e-tests-emptydir-2hlj4, resource: bindings, ignored listing per whitelist
May 1 11:30:02.113: INFO: namespace e2e-tests-emptydir-2hlj4 deletion completed in 6.087869858s
• [SLOW TEST:10.299 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:30:02.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 1 11:30:02.270: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18efa8ee-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-jltc6" to be "success or failure"
May 1 11:30:02.279: INFO: Pod "downwardapi-volume-18efa8ee-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.018135ms
May 1 11:30:04.283: INFO: Pod "downwardapi-volume-18efa8ee-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013376436s
May 1 11:30:06.287: INFO: Pod "downwardapi-volume-18efa8ee-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017588776s
STEP: Saw pod success
May 1 11:30:06.287: INFO: Pod "downwardapi-volume-18efa8ee-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:30:06.290: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-18efa8ee-8b9f-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May 1 11:30:06.341: INFO: Waiting for pod downwardapi-volume-18efa8ee-8b9f-11ea-88a3-0242ac110017 to disappear
May 1 11:30:06.344: INFO: Pod downwardapi-volume-18efa8ee-8b9f-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:30:06.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jltc6" for this suite.
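[Editor's note] The "Waiting up to 5m0s … to be 'success or failure'" lines above follow a fixed pattern throughout this run: the framework polls the pod phase roughly every 2 seconds (note the Elapsed values ~0s, ~2s, ~4s) until it sees a terminal phase. A minimal Python sketch of that wait loop (illustrative only, not the e2e framework's code):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll until the pod reaches a terminal phase, mirroring the
    'Waiting up to 5m0s ... to be "success or failure"' loop in the log."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase == "Succeeded":
            return "success"
        if phase == "Failed":
            return "failure"
        time.sleep(interval)  # log shows ~2s between polls
    raise TimeoutError("pod never reached a terminal phase")

# Simulated phase sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), interval=0.01))  # success
```

"Saw pod success" in the log corresponds to the `Succeeded` branch; a `Failed` phase would also satisfy the "success or failure" condition, just with the other outcome.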
May 1 11:30:12.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:30:12.516: INFO: namespace: e2e-tests-downward-api-jltc6, resource: bindings, ignored listing per whitelist
May 1 11:30:12.520: INFO: namespace e2e-tests-downward-api-jltc6 deletion completed in 6.172234187s
• [SLOW TEST:10.407 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:30:12.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
May 1 11:30:13.241: INFO: Waiting up to 5m0s for pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-2r62k" in namespace "e2e-tests-svcaccounts-fjhps" to be "success or failure"
May 1 11:30:13.278: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-2r62k": Phase="Pending", Reason="", readiness=false. Elapsed: 37.013623ms
May 1 11:30:15.497: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-2r62k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255569977s
May 1 11:30:17.500: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-2r62k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258801346s
May 1 11:30:19.504: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-2r62k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262684633s
May 1 11:30:21.508: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-2r62k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.267000469s
STEP: Saw pod success
May 1 11:30:21.508: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-2r62k" satisfied condition "success or failure"
May 1 11:30:21.510: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-2r62k container token-test:
STEP: delete the pod
May 1 11:30:21.904: INFO: Waiting for pod pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-2r62k to disappear
May 1 11:30:22.082: INFO: Pod pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-2r62k no longer exists
STEP: Creating a pod to test consume service account root CA
May 1 11:30:22.087: INFO: Waiting up to 5m0s for pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-99lnw" in namespace "e2e-tests-svcaccounts-fjhps" to be "success or failure"
May 1 11:30:22.115: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-99lnw": Phase="Pending", Reason="", readiness=false. Elapsed: 27.786133ms
May 1 11:30:24.123: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-99lnw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036422575s
May 1 11:30:26.127: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-99lnw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04028496s
May 1 11:30:28.131: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-99lnw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044062328s
May 1 11:30:30.135: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-99lnw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048506675s
STEP: Saw pod success
May 1 11:30:30.135: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-99lnw" satisfied condition "success or failure"
May 1 11:30:30.138: INFO: Trying to get logs from node hunter-worker pod pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-99lnw container root-ca-test:
STEP: delete the pod
May 1 11:30:30.204: INFO: Waiting for pod pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-99lnw to disappear
May 1 11:30:30.226: INFO: Pod pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-99lnw no longer exists
STEP: Creating a pod to test consume service account namespace
May 1 11:30:30.232: INFO: Waiting up to 5m0s for pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-57crs" in namespace "e2e-tests-svcaccounts-fjhps" to be "success or failure"
May 1 11:30:30.237: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-57crs": Phase="Pending", Reason="", readiness=false. Elapsed: 5.791512ms
May 1 11:30:32.562: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-57crs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330495968s
May 1 11:30:34.566: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-57crs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33443141s
May 1 11:30:36.570: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-57crs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338033202s
May 1 11:30:38.706: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-57crs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.474424308s
May 1 11:30:40.710: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-57crs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.477893694s
STEP: Saw pod success
May 1 11:30:40.710: INFO: Pod "pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-57crs" satisfied condition "success or failure"
May 1 11:30:40.712: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-57crs container namespace-test:
STEP: delete the pod
May 1 11:30:40.793: INFO: Waiting for pod pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-57crs to disappear
May 1 11:30:40.800: INFO: Pod pod-service-account-1f79ec47-8b9f-11ea-88a3-0242ac110017-57crs no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:30:40.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-fjhps" for this suite.
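[Editor's note] The three pods in this ServiceAccounts test (containers token-test, root-ca-test, and namespace-test) each read one file from the auto-mounted service-account volume. The standard in-pod mount paths, sketched in Python (the mapping of container names to files is inferred from the log, not stated in it):

```python
# Standard location where Kubernetes auto-mounts service account credentials.
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

# One file per test container, mirroring the container names in the log.
FILES = {
    "token-test": "token",         # bearer token for the API server
    "root-ca-test": "ca.crt",      # cluster root CA bundle
    "namespace-test": "namespace", # the pod's own namespace name
}

for container, filename in sorted(FILES.items()):
    print(f"{container} -> {SA_DIR}/{filename}")
```

Each pod "succeeds" when its container can read the expected file content, which is what the framework checks via the container logs above.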
May 1 11:30:46.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:30:46.843: INFO: namespace: e2e-tests-svcaccounts-fjhps, resource: bindings, ignored listing per whitelist May 1 11:30:46.901: INFO: namespace e2e-tests-svcaccounts-fjhps deletion completed in 6.097127477s • [SLOW TEST:34.381 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:30:46.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-339fceb0-8b9f-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume secrets May 1 11:30:47.179: INFO: Waiting up to 5m0s for pod "pod-secrets-33b42f61-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-secrets-c9m6f" to be "success or failure" May 1 11:30:47.190: INFO: Pod "pod-secrets-33b42f61-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.125926ms May 1 11:30:49.194: INFO: Pod "pod-secrets-33b42f61-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015537277s May 1 11:30:51.203: INFO: Pod "pod-secrets-33b42f61-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023927995s STEP: Saw pod success May 1 11:30:51.203: INFO: Pod "pod-secrets-33b42f61-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:30:51.206: INFO: Trying to get logs from node hunter-worker pod pod-secrets-33b42f61-8b9f-11ea-88a3-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 11:30:51.366: INFO: Waiting for pod pod-secrets-33b42f61-8b9f-11ea-88a3-0242ac110017 to disappear May 1 11:30:51.394: INFO: Pod pod-secrets-33b42f61-8b9f-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:30:51.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-c9m6f" for this suite. May 1 11:30:57.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:30:57.420: INFO: namespace: e2e-tests-secrets-c9m6f, resource: bindings, ignored listing per whitelist May 1 11:30:57.467: INFO: namespace e2e-tests-secrets-c9m6f deletion completed in 6.069149598s STEP: Destroying namespace "e2e-tests-secret-namespace-7svd7" for this suite. 
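Each namespace teardown in this log ends with a "deletion completed in …s" entry. When summarizing a run, those durations can be pulled out with a simple regex; a sketch, with the sample lines copied from this log:

```python
import re

LOG = """\
May  1 11:30:57.467: INFO: namespace e2e-tests-secrets-c9m6f deletion completed in 6.069149598s
May  1 11:30:46.901: INFO: namespace e2e-tests-svcaccounts-fjhps deletion completed in 6.097127477s
"""

PATTERN = re.compile(r"namespace (\S+) deletion completed in ([\d.]+)s")

def deletion_times(text):
    """Map namespace name -> deletion duration in seconds."""
    return {ns: float(sec) for ns, sec in PATTERN.findall(text)}

times = deletion_times(LOG)
```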
May 1 11:31:03.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:31:03.514: INFO: namespace: e2e-tests-secret-namespace-7svd7, resource: bindings, ignored listing per whitelist May 1 11:31:03.551: INFO: namespace e2e-tests-secret-namespace-7svd7 deletion completed in 6.084137869s • [SLOW TEST:16.650 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:31:03.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 1 11:31:03.689: INFO: Waiting up to 5m0s for pod "pod-3d89ae83-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-qcf4b" to be "success or failure" May 1 11:31:03.693: INFO: Pod "pod-3d89ae83-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.551715ms May 1 11:31:05.697: INFO: Pod "pod-3d89ae83-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008479763s May 1 11:31:07.701: INFO: Pod "pod-3d89ae83-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012877222s STEP: Saw pod success May 1 11:31:07.701: INFO: Pod "pod-3d89ae83-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:31:07.704: INFO: Trying to get logs from node hunter-worker2 pod pod-3d89ae83-8b9f-11ea-88a3-0242ac110017 container test-container: STEP: delete the pod May 1 11:31:07.749: INFO: Waiting for pod pod-3d89ae83-8b9f-11ea-88a3-0242ac110017 to disappear May 1 11:31:07.759: INFO: Pod pod-3d89ae83-8b9f-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:31:07.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qcf4b" for this suite. 
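The test name encodes the expectation being verified: a file written as root with mode 0644 on the default emptyDir medium. As a side illustration (not part of the e2e framework), how mode 0644 renders as an `ls`-style permission string can be checked with the Python standard library:

```python
import stat

# 0o644 on a regular file: owner read/write, group read, others read.
mode_str = stat.filemode(stat.S_IFREG | 0o644)
print(mode_str)  # -rw-r--r--
```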
May 1 11:31:13.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:31:13.794: INFO: namespace: e2e-tests-emptydir-qcf4b, resource: bindings, ignored listing per whitelist May 1 11:31:13.843: INFO: namespace e2e-tests-emptydir-qcf4b deletion completed in 6.080363129s • [SLOW TEST:10.292 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:31:13.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 11:31:14.054: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 1 11:31:14.065: INFO: Number of nodes with available pods: 0 May 1 11:31:14.065: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 1 11:31:14.155: INFO: Number of nodes with available pods: 0 May 1 11:31:14.155: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:15.158: INFO: Number of nodes with available pods: 0 May 1 11:31:15.158: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:16.158: INFO: Number of nodes with available pods: 0 May 1 11:31:16.159: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:17.158: INFO: Number of nodes with available pods: 0 May 1 11:31:17.158: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:18.159: INFO: Number of nodes with available pods: 1 May 1 11:31:18.159: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 1 11:31:18.203: INFO: Number of nodes with available pods: 1 May 1 11:31:18.203: INFO: Number of running nodes: 0, number of available pods: 1 May 1 11:31:19.359: INFO: Number of nodes with available pods: 0 May 1 11:31:19.359: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 1 11:31:19.407: INFO: Number of nodes with available pods: 0 May 1 11:31:19.407: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:20.411: INFO: Number of nodes with available pods: 0 May 1 11:31:20.411: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:21.411: INFO: Number of nodes with available pods: 0 May 1 11:31:21.411: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:22.411: INFO: Number of nodes with available pods: 0 May 1 11:31:22.411: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:23.411: INFO: Number of nodes with available pods: 0 May 1 11:31:23.411: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:24.410: INFO: Number of nodes with available pods: 0 May 1 
11:31:24.410: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:25.411: INFO: Number of nodes with available pods: 0 May 1 11:31:25.411: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:26.410: INFO: Number of nodes with available pods: 0 May 1 11:31:26.410: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:27.411: INFO: Number of nodes with available pods: 0 May 1 11:31:27.411: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:28.449: INFO: Number of nodes with available pods: 0 May 1 11:31:28.449: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:29.411: INFO: Number of nodes with available pods: 0 May 1 11:31:29.411: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:30.411: INFO: Number of nodes with available pods: 0 May 1 11:31:30.411: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:31.411: INFO: Number of nodes with available pods: 0 May 1 11:31:31.411: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:32.431: INFO: Number of nodes with available pods: 0 May 1 11:31:32.431: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:33.411: INFO: Number of nodes with available pods: 0 May 1 11:31:33.411: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:34.411: INFO: Number of nodes with available pods: 0 May 1 11:31:34.411: INFO: Node hunter-worker is running more than one daemon pod May 1 11:31:35.411: INFO: Number of nodes with available pods: 1 May 1 11:31:35.411: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-gfbtz, will wait for the garbage collector to delete the pods 
May 1 11:31:35.475: INFO: Deleting DaemonSet.extensions daemon-set took: 6.746866ms May 1 11:31:35.575: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.240563ms May 1 11:31:41.378: INFO: Number of nodes with available pods: 0 May 1 11:31:41.378: INFO: Number of running nodes: 0, number of available pods: 0 May 1 11:31:41.380: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gfbtz/daemonsets","resourceVersion":"8157061"},"items":null} May 1 11:31:41.383: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gfbtz/pods","resourceVersion":"8157061"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:31:41.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-gfbtz" for this suite. 
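The DaemonSet test above drives scheduling purely through labels: daemon pods run on a node only while the node's labels satisfy the DaemonSet's node selector, so relabeling a node from blue to green evicts the pod and reschedules it once the selector is updated to green. The selector semantics are plain subset matching; a sketch with hypothetical node and selector values:

```python
def selector_matches(node_labels, selector):
    """A node matches when every selector key/value pair is present in
    the node's labels (Kubernetes equality-based label selection)."""
    return all(node_labels.get(k) == v for k, v in selector.items())

node = {"kubernetes.io/hostname": "hunter-worker", "color": "blue"}
assert selector_matches(node, {"color": "blue"})

# Relabel the node green: the blue-selecting DaemonSet no longer matches,
# so its pod is removed from this node.
node["color"] = "green"
matches_after = selector_matches(node, {"color": "blue"})
```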
May 1 11:31:47.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:31:47.478: INFO: namespace: e2e-tests-daemonsets-gfbtz, resource: bindings, ignored listing per whitelist May 1 11:31:47.507: INFO: namespace e2e-tests-daemonsets-gfbtz deletion completed in 6.079654633s • [SLOW TEST:33.665 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:31:47.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-57bfd9c7-8b9f-11ea-88a3-0242ac110017 STEP: Creating secret with name s-test-opt-upd-57bfda38-8b9f-11ea-88a3-0242ac110017 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-57bfd9c7-8b9f-11ea-88a3-0242ac110017 STEP: Updating secret s-test-opt-upd-57bfda38-8b9f-11ea-88a3-0242ac110017 STEP: Creating secret with name s-test-opt-create-57bfda5d-8b9f-11ea-88a3-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:31:57.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-9dbb9" for this suite. May 1 11:32:19.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:32:19.920: INFO: namespace: e2e-tests-secrets-9dbb9, resource: bindings, ignored listing per whitelist May 1 11:32:19.927: INFO: namespace e2e-tests-secrets-9dbb9 deletion completed in 22.092462496s • [SLOW TEST:32.420 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:32:19.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 11:32:20.032: INFO: Creating deployment "nginx-deployment" May 1 11:32:20.073: INFO: Waiting for observed generation 1 May 1 11:32:22.193: INFO: Waiting for all required pods to come up May 1 
11:32:22.199: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 1 11:32:32.210: INFO: Waiting for deployment "nginx-deployment" to complete May 1 11:32:32.215: INFO: Updating deployment "nginx-deployment" with a non-existent image May 1 11:32:32.220: INFO: Updating deployment nginx-deployment May 1 11:32:32.220: INFO: Waiting for observed generation 2 May 1 11:32:34.330: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 1 11:32:34.635: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 1 11:32:34.638: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 1 11:32:34.646: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 1 11:32:34.646: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 1 11:32:34.648: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 1 11:32:34.653: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 1 11:32:34.653: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 1 11:32:34.659: INFO: Updating deployment nginx-deployment May 1 11:32:34.659: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 1 11:32:35.283: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 1 11:32:36.028: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 1 11:32:38.871: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zh9wz/deployments/nginx-deployment,UID:6b0db475-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157477,Generation:3,CreationTimestamp:2020-05-01 11:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-01 11:32:35 +0000 UTC 2020-05-01 11:32:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-01 11:32:36 +0000 UTC 2020-05-01 11:32:20 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 1 11:32:39.175: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zh9wz/replicasets/nginx-deployment-5c98f8fb5,UID:7251734c-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157465,Generation:3,CreationTimestamp:2020-05-01 11:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6b0db475-8b9f-11ea-99e8-0242ac110002 0xc0023ce9f7 0xc0023ce9f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 11:32:39.175: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 1 11:32:39.175: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zh9wz/replicasets/nginx-deployment-85ddf47c5d,UID:6b14b99e-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157474,Generation:3,CreationTimestamp:2020-05-01 11:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6b0db475-8b9f-11ea-99e8-0242ac110002 0xc0023ced67 0xc0023ced68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 1 11:32:39.409: INFO: Pod "nginx-deployment-5c98f8fb5-4mfpl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4mfpl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-4mfpl,UID:7497ca92-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157460,Generation:0,CreationTimestamp:2020-05-01 11:32:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc002410d67 0xc002410d68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002410de0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002410e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.409: INFO: Pod "nginx-deployment-5c98f8fb5-79zsv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-79zsv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-79zsv,UID:7448c727-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157451,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc002410e77 0xc002410e78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002410ef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002410f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.409: INFO: Pod "nginx-deployment-5c98f8fb5-bwjqk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bwjqk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-bwjqk,UID:7448e735-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157528,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc002410f87 0xc002410f88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002411000} {node.kubernetes.io/unreachable Exists NoExecute 0xc002411020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 11:32:36 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.410: INFO: Pod "nginx-deployment-5c98f8fb5-ccfvh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ccfvh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-ccfvh,UID:742520c5-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157479,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc002411130 0xc002411131}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024111b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024111d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.410: INFO: Pod "nginx-deployment-5c98f8fb5-ccs2v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ccs2v,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-ccs2v,UID:725846bf-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157381,Generation:0,CreationTimestamp:2020-05-01 11:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc002411290 0xc002411291}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002411310} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002411330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 11:32:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.410: INFO: Pod "nginx-deployment-5c98f8fb5-ctbr7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ctbr7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-ctbr7,UID:7255457c-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157366,Generation:0,CreationTimestamp:2020-05-01 11:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc0024113f0 0xc0024113f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002411470} {node.kubernetes.io/unreachable Exists NoExecute 0xc002411490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 11:32:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.410: INFO: Pod "nginx-deployment-5c98f8fb5-fzgkb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fzgkb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-fzgkb,UID:73fc99dd-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157464,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc002411550 0xc002411551}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024115d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024115f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.411: INFO: Pod "nginx-deployment-5c98f8fb5-hcq7x" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hcq7x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-hcq7x,UID:7448db2a-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157449,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc0024116b0 0xc0024116b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002411730} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002411750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.411: INFO: Pod "nginx-deployment-5c98f8fb5-j4hv2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-j4hv2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-j4hv2,UID:72804f25-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157397,Generation:0,CreationTimestamp:2020-05-01 11:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc0024117c7 0xc0024117c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002411840} {node.kubernetes.io/unreachable Exists NoExecute 0xc002411860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 11:32:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.411: INFO: Pod "nginx-deployment-5c98f8fb5-nnbzw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nnbzw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-nnbzw,UID:72784d3b-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157394,Generation:0,CreationTimestamp:2020-05-01 11:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc002411920 0xc002411921}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024119a0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0024119c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 11:32:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.411: INFO: Pod "nginx-deployment-5c98f8fb5-p859k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p859k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-p859k,UID:74252734-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157489,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc002411a80 0xc002411a81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002411b00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002411b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.411: INFO: Pod "nginx-deployment-5c98f8fb5-rtq58" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rtq58,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-rtq58,UID:725835e0-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157367,Generation:0,CreationTimestamp:2020-05-01 11:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc002411be0 0xc002411be1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002411c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002411c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 11:32:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.412: INFO: Pod "nginx-deployment-5c98f8fb5-vb89g" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vb89g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-5c98f8fb5-vb89g,UID:7448d983-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157527,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 7251734c-8b9f-11ea-99e8-0242ac110002 0xc002411d40 0xc002411d41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002411dc0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002411de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.412: INFO: Pod "nginx-deployment-85ddf47c5d-2bfgv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2bfgv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-2bfgv,UID:6b180316-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157284,Generation:0,CreationTimestamp:2020-05-01 11:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc002411ea0 0xc002411ea1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002411f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002411f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.86,StartTime:2020-05-01 11:32:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 11:32:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://2b7079d7c31c45e8f5816539da03f218849a41ec1547934f8655a61d441d3918}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.412: INFO: Pod "nginx-deployment-85ddf47c5d-2msrq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2msrq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-2msrq,UID:73fcb0eb-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157473,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc002411ff7 0xc002411ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0010ea070} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010ea090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.412: INFO: Pod "nginx-deployment-85ddf47c5d-2tgpn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2tgpn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-2tgpn,UID:6b1b3ae0-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157320,Generation:0,CreationTimestamp:2020-05-01 11:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010ea147 0xc0010ea148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0010ea1c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010ea1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.114,StartTime:2020-05-01 11:32:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 11:32:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://be0af44a52209b7e218e478a310a781ec4ee6ecca1631398163fdefad047c1f7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.412: INFO: Pod "nginx-deployment-85ddf47c5d-46v7v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-46v7v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-46v7v,UID:74253bfc-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157490,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010ea567 0xc0010ea568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0010ea5e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010ea600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 11:32:36 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.412: INFO: Pod "nginx-deployment-85ddf47c5d-5bqfb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5bqfb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-5bqfb,UID:73fb7488-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157459,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010ea8d7 0xc0010ea8d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0010eac20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010eac40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.412: INFO: Pod "nginx-deployment-85ddf47c5d-5lq29" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5lq29,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-5lq29,UID:7448a819-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157448,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010ead57 0xc0010ead58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0010eae20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010eae40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.412: INFO: Pod "nginx-deployment-85ddf47c5d-cg64x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cg64x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-cg64x,UID:7448cc9a-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157450,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010eaee7 0xc0010eaee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0010eafa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010eafc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.413: INFO: Pod "nginx-deployment-85ddf47c5d-dv9ns" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dv9ns,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-dv9ns,UID:73fcc6f8-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157471,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010eb037 0xc0010eb038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0010eb190} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010eb1b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 11:32:36 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.413: INFO: Pod "nginx-deployment-85ddf47c5d-f8w5m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f8w5m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-f8w5m,UID:74253817-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157508,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010eb267 0xc0010eb268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0010eb390} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010eb3b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.413: INFO: Pod "nginx-deployment-85ddf47c5d-gcgsg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gcgsg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-gcgsg,UID:7448a890-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157521,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010eb467 0xc0010eb468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0010eb4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010eb500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.413: INFO: Pod "nginx-deployment-85ddf47c5d-gtvmd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gtvmd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-gtvmd,UID:6b2139df-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157323,Generation:0,CreationTimestamp:2020-05-01 11:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010eb5b7 0xc0010eb5b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0010eb630} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010eb650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.89,StartTime:2020-05-01 11:32:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 11:32:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine 
docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://046966e3db42a396058f7af969294b612b864a215f666ee3ab92e38a2d38f89a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.413: INFO: Pod "nginx-deployment-85ddf47c5d-htt5k" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-htt5k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-htt5k,UID:6b1b3867-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157303,Generation:0,CreationTimestamp:2020-05-01 11:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010eb717 0xc0010eb718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0010eb790} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010eb7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.87,StartTime:2020-05-01 11:32:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 11:32:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0f1c8aa3b2258d3a6fe8c2cae501a71ff4116a1da95258f2eab73aae7fc333d3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.413: INFO: Pod "nginx-deployment-85ddf47c5d-lqr6g" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lqr6g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-lqr6g,UID:742527bd-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157517,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010eb877 0xc0010eb878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0010eb8f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010eb910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.413: INFO: Pod "nginx-deployment-85ddf47c5d-nplnm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nplnm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-nplnm,UID:6b187b70-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157307,Generation:0,CreationTimestamp:2020-05-01 11:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc0010eb9c7 0xc0010eb9c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0010eba40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9c010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.112,StartTime:2020-05-01 11:32:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 11:32:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine 
docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://097bfe648c88c1e36a3a4e95424b6c28280520296ac1cb21dd8ed393866045fe}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.414: INFO: Pod "nginx-deployment-85ddf47c5d-pjcrn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pjcrn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-pjcrn,UID:6b1b3ce6-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157296,Generation:0,CreationTimestamp:2020-05-01 11:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc001e9c0d7 0xc001e9c0d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9c150} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9c170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.88,StartTime:2020-05-01 11:32:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 11:32:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fb1f6b12b60f4041dbd2b831e11c2185a6321048394585e0746c6663dd227e5b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.414: INFO: Pod "nginx-deployment-85ddf47c5d-pw6j8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pw6j8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-pw6j8,UID:742501f1-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157481,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc001e9c2a7 0xc001e9c2a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001e9c320} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9c340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-01 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.414: INFO: Pod "nginx-deployment-85ddf47c5d-qbnqb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qbnqb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-qbnqb,UID:7448c240-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157452,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc001e9c477 0xc001e9c478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9c670} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9c690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.414: INFO: Pod "nginx-deployment-85ddf47c5d-vfjhw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vfjhw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-vfjhw,UID:7448b137-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157523,Generation:0,CreationTimestamp:2020-05-01 11:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc001e9c707 0xc001e9c708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001e9d190} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9d1b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 11:32:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.414: INFO: Pod "nginx-deployment-85ddf47c5d-zgk9x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zgk9x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-zgk9x,UID:6b216475-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157334,Generation:0,CreationTimestamp:2020-05-01 11:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc001e9d367 0xc001e9d368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9d5b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9d5d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.116,StartTime:2020-05-01 11:32:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 11:32:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine 
docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2415a3f3dfb91e4767dd508c931b583d68f16f187b9aa8cadcddab959a958b9f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 1 11:32:39.414: INFO: Pod "nginx-deployment-85ddf47c5d-zqdz4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zqdz4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zh9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zh9wz/pods/nginx-deployment-85ddf47c5d-zqdz4,UID:6b1b3b44-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157310,Generation:0,CreationTimestamp:2020-05-01 11:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6b14b99e-8b9f-11ea-99e8-0242ac110002 0xc001e9d6c7 0xc001e9d6c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dnj9x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dnj9x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dnj9x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9d830} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9d850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 11:32:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.113,StartTime:2020-05-01 11:32:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-01 11:32:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://13cfd79a7a2fd93052da1ede2157a3d0f9d8244282fd6331b2a1ab470d14073d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:32:39.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-zh9wz" for this suite. 
May 1 11:33:07.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:33:07.957: INFO: namespace: e2e-tests-deployment-zh9wz, resource: bindings, ignored listing per whitelist May 1 11:33:08.018: INFO: namespace e2e-tests-deployment-zh9wz deletion completed in 28.314777852s • [SLOW TEST:48.090 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:33:08.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 11:33:08.250: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-d7vn5" to be "success or failure" May 1 11:33:08.277: INFO: Pod "downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.554721ms May 1 11:33:10.280: INFO: Pod "downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030032819s May 1 11:33:12.284: INFO: Pod "downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033755587s May 1 11:33:14.288: INFO: Pod "downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.037564013s May 1 11:33:16.292: INFO: Pod "downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 8.04157744s May 1 11:33:18.295: INFO: Pod "downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 10.044592982s May 1 11:33:20.331: INFO: Pod "downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 12.080603415s May 1 11:33:22.660: INFO: Pod "downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 14.410166375s May 1 11:33:24.750: INFO: Pod "downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.500479882s STEP: Saw pod success May 1 11:33:24.751: INFO: Pod "downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:33:24.753: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017 container client-container: STEP: delete the pod May 1 11:33:24.936: INFO: Waiting for pod downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017 to disappear May 1 11:33:25.247: INFO: Pod downwardapi-volume-87b6d9b9-8b9f-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:33:25.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-d7vn5" for this suite. May 1 11:33:31.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:33:31.388: INFO: namespace: e2e-tests-downward-api-d7vn5, resource: bindings, ignored listing per whitelist May 1 11:33:31.424: INFO: namespace e2e-tests-downward-api-d7vn5 deletion completed in 6.173138198s • [SLOW TEST:23.406 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:33:31.425: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 11:33:31.901: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 1 11:33:36.906: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 1 11:33:36.906: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 1 11:33:37.071: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-8czmq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8czmq/deployments/test-cleanup-deployment,UID:98e040fe-8b9f-11ea-99e8-0242ac110002,ResourceVersion:8157926,Generation:1,CreationTimestamp:2020-05-01 11:33:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 1 11:33:37.074: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:33:37.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-8czmq" for this suite. 
May 1 11:33:43.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:33:43.289: INFO: namespace: e2e-tests-deployment-8czmq, resource: bindings, ignored listing per whitelist May 1 11:33:43.349: INFO: namespace e2e-tests-deployment-8czmq deletion completed in 6.212035659s • [SLOW TEST:11.924 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:33:43.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-t27fj STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-t27fj to expose endpoints map[] May 1 11:33:43.495: INFO: Get endpoints failed (13.281369ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 1 11:33:44.500: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-t27fj exposes endpoints 
map[] (1.017615139s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-t27fj STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-t27fj to expose endpoints map[pod1:[100]] May 1 11:33:48.601: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.095521615s elapsed, will retry) May 1 11:33:49.608: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-t27fj exposes endpoints map[pod1:[100]] (5.102544357s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-t27fj STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-t27fj to expose endpoints map[pod1:[100] pod2:[101]] May 1 11:33:53.931: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-t27fj exposes endpoints map[pod1:[100] pod2:[101]] (4.318048275s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-t27fj STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-t27fj to expose endpoints map[pod2:[101]] May 1 11:33:54.975: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-t27fj exposes endpoints map[pod2:[101]] (1.040219849s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-t27fj STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-t27fj to expose endpoints map[] May 1 11:33:56.008: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-t27fj exposes endpoints map[] (1.029441352s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:33:56.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-t27fj" for this suite. 
May 1 11:34:18.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:34:18.162: INFO: namespace: e2e-tests-services-t27fj, resource: bindings, ignored listing per whitelist May 1 11:34:18.227: INFO: namespace e2e-tests-services-t27fj deletion completed in 22.089799786s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:34.878 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:34:18.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-b18acc12-8b9f-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume secrets May 1 11:34:18.342: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b190a9e8-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-d7z7q" to be "success or failure" 
May 1 11:34:18.347: INFO: Pod "pod-projected-secrets-b190a9e8-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.676323ms May 1 11:34:20.488: INFO: Pod "pod-projected-secrets-b190a9e8-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145416228s May 1 11:34:22.491: INFO: Pod "pod-projected-secrets-b190a9e8-8b9f-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.149026153s May 1 11:34:24.496: INFO: Pod "pod-projected-secrets-b190a9e8-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.153564847s STEP: Saw pod success May 1 11:34:24.496: INFO: Pod "pod-projected-secrets-b190a9e8-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:34:24.499: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-b190a9e8-8b9f-11ea-88a3-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 1 11:34:24.526: INFO: Waiting for pod pod-projected-secrets-b190a9e8-8b9f-11ea-88a3-0242ac110017 to disappear May 1 11:34:24.583: INFO: Pod pod-projected-secrets-b190a9e8-8b9f-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:34:24.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d7z7q" for this suite. 
May 1 11:34:30.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:34:30.619: INFO: namespace: e2e-tests-projected-d7z7q, resource: bindings, ignored listing per whitelist May 1 11:34:30.682: INFO: namespace e2e-tests-projected-d7z7q deletion completed in 6.094959032s • [SLOW TEST:12.455 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:34:30.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-b8fe1f65-8b9f-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 11:34:30.802: INFO: Waiting up to 5m0s for pod "pod-configmaps-b8fe8103-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-configmap-ppk48" to be "success or failure" May 1 11:34:30.806: INFO: Pod "pod-configmaps-b8fe8103-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.531688ms May 1 11:34:32.810: INFO: Pod "pod-configmaps-b8fe8103-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008455882s May 1 11:34:34.873: INFO: Pod "pod-configmaps-b8fe8103-8b9f-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.071199452s May 1 11:34:36.876: INFO: Pod "pod-configmaps-b8fe8103-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074481512s STEP: Saw pod success May 1 11:34:36.876: INFO: Pod "pod-configmaps-b8fe8103-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:34:36.879: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-b8fe8103-8b9f-11ea-88a3-0242ac110017 container configmap-volume-test: STEP: delete the pod May 1 11:34:36.912: INFO: Waiting for pod pod-configmaps-b8fe8103-8b9f-11ea-88a3-0242ac110017 to disappear May 1 11:34:36.916: INFO: Pod pod-configmaps-b8fe8103-8b9f-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:34:36.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-ppk48" for this suite. 
May 1 11:34:42.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:34:42.966: INFO: namespace: e2e-tests-configmap-ppk48, resource: bindings, ignored listing per whitelist May 1 11:34:43.018: INFO: namespace e2e-tests-configmap-ppk48 deletion completed in 6.098750364s • [SLOW TEST:12.336 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:34:43.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-c055d3ff-8b9f-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 11:34:43.178: INFO: Waiting up to 5m0s for pod "pod-configmaps-c058b893-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-configmap-zsqnh" to be "success or failure" May 1 11:34:43.208: INFO: Pod "pod-configmaps-c058b893-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.023776ms May 1 11:34:45.213: INFO: Pod "pod-configmaps-c058b893-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034384781s May 1 11:34:47.236: INFO: Pod "pod-configmaps-c058b893-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057159329s STEP: Saw pod success May 1 11:34:47.236: INFO: Pod "pod-configmaps-c058b893-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:34:47.238: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-c058b893-8b9f-11ea-88a3-0242ac110017 container configmap-volume-test: STEP: delete the pod May 1 11:34:47.252: INFO: Waiting for pod pod-configmaps-c058b893-8b9f-11ea-88a3-0242ac110017 to disappear May 1 11:34:47.276: INFO: Pod pod-configmaps-c058b893-8b9f-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:34:47.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zsqnh" for this suite. 
May 1 11:34:53.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:34:53.382: INFO: namespace: e2e-tests-configmap-zsqnh, resource: bindings, ignored listing per whitelist May 1 11:34:53.394: INFO: namespace e2e-tests-configmap-zsqnh deletion completed in 6.114447989s • [SLOW TEST:10.376 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:34:53.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 1 11:34:53.506: INFO: Waiting up to 5m0s for pod "var-expansion-c682b345-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-var-expansion-qlh7p" to be "success or failure" May 1 11:34:53.526: INFO: Pod "var-expansion-c682b345-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.830023ms May 1 11:34:55.531: INFO: Pod "var-expansion-c682b345-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024644182s May 1 11:34:57.534: INFO: Pod "var-expansion-c682b345-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028048502s STEP: Saw pod success May 1 11:34:57.534: INFO: Pod "var-expansion-c682b345-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:34:57.536: INFO: Trying to get logs from node hunter-worker pod var-expansion-c682b345-8b9f-11ea-88a3-0242ac110017 container dapi-container: STEP: delete the pod May 1 11:34:57.609: INFO: Waiting for pod var-expansion-c682b345-8b9f-11ea-88a3-0242ac110017 to disappear May 1 11:34:57.635: INFO: Pod var-expansion-c682b345-8b9f-11ea-88a3-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:34:57.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-qlh7p" for this suite. 
May 1 11:35:03.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:35:03.772: INFO: namespace: e2e-tests-var-expansion-qlh7p, resource: bindings, ignored listing per whitelist May 1 11:35:03.821: INFO: namespace e2e-tests-var-expansion-qlh7p deletion completed in 6.182287956s • [SLOW TEST:10.426 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:35:03.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 1 11:35:04.031: INFO: Waiting up to 5m0s for pod "downward-api-ccca18ba-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-r6c9r" to be "success or failure" May 1 11:35:04.041: INFO: Pod "downward-api-ccca18ba-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.751256ms May 1 11:35:06.046: INFO: Pod "downward-api-ccca18ba-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014442226s May 1 11:35:08.050: INFO: Pod "downward-api-ccca18ba-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018912955s STEP: Saw pod success May 1 11:35:08.050: INFO: Pod "downward-api-ccca18ba-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:35:08.053: INFO: Trying to get logs from node hunter-worker2 pod downward-api-ccca18ba-8b9f-11ea-88a3-0242ac110017 container dapi-container: STEP: delete the pod May 1 11:35:08.104: INFO: Waiting for pod downward-api-ccca18ba-8b9f-11ea-88a3-0242ac110017 to disappear May 1 11:35:08.108: INFO: Pod downward-api-ccca18ba-8b9f-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:35:08.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-r6c9r" for this suite. 
May 1 11:35:14.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:35:14.180: INFO: namespace: e2e-tests-downward-api-r6c9r, resource: bindings, ignored listing per whitelist May 1 11:35:14.259: INFO: namespace e2e-tests-downward-api-r6c9r deletion completed in 6.147547278s • [SLOW TEST:10.438 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:35:14.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 1 11:35:14.458: INFO: Waiting up to 5m0s for pod "pod-d3016266-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-49q7p" to be "success or failure" May 1 11:35:14.462: INFO: Pod "pod-d3016266-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.689107ms May 1 11:35:16.488: INFO: Pod "pod-d3016266-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.029795457s May 1 11:35:18.493: INFO: Pod "pod-d3016266-8b9f-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.034134116s May 1 11:35:20.497: INFO: Pod "pod-d3016266-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038736669s STEP: Saw pod success May 1 11:35:20.497: INFO: Pod "pod-d3016266-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:35:20.500: INFO: Trying to get logs from node hunter-worker pod pod-d3016266-8b9f-11ea-88a3-0242ac110017 container test-container: STEP: delete the pod May 1 11:35:20.534: INFO: Waiting for pod pod-d3016266-8b9f-11ea-88a3-0242ac110017 to disappear May 1 11:35:20.590: INFO: Pod pod-d3016266-8b9f-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:35:20.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-49q7p" for this suite. 
May 1 11:35:26.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:35:26.743: INFO: namespace: e2e-tests-emptydir-49q7p, resource: bindings, ignored listing per whitelist May 1 11:35:26.745: INFO: namespace e2e-tests-emptydir-49q7p deletion completed in 6.151304147s • [SLOW TEST:12.486 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:35:26.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-da6fb15e-8b9f-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume secrets May 1 11:35:26.987: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da7454eb-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-4k7wq" to be "success or failure" May 1 11:35:26.993: INFO: Pod "pod-projected-secrets-da7454eb-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.000333ms May 1 11:35:29.051: INFO: Pod "pod-projected-secrets-da7454eb-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063924869s May 1 11:35:31.056: INFO: Pod "pod-projected-secrets-da7454eb-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068379311s STEP: Saw pod success May 1 11:35:31.056: INFO: Pod "pod-projected-secrets-da7454eb-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:35:31.059: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-da7454eb-8b9f-11ea-88a3-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 1 11:35:31.238: INFO: Waiting for pod pod-projected-secrets-da7454eb-8b9f-11ea-88a3-0242ac110017 to disappear May 1 11:35:31.404: INFO: Pod pod-projected-secrets-da7454eb-8b9f-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:35:31.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4k7wq" for this suite. 
May 1 11:35:37.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:35:37.479: INFO: namespace: e2e-tests-projected-4k7wq, resource: bindings, ignored listing per whitelist May 1 11:35:37.495: INFO: namespace e2e-tests-projected-4k7wq deletion completed in 6.087666071s • [SLOW TEST:10.750 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:35:37.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 11:35:37.632: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 5.335531ms) May 1 11:35:37.636: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.665517ms) May 1 11:35:37.639: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.299672ms) May 1 11:35:37.643: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.628367ms) May 1 11:35:37.646: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.204676ms) May 1 11:35:37.649: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.487401ms) May 1 11:35:37.653: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.774053ms) May 1 11:35:37.657: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.503776ms) May 1 11:35:37.660: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.633802ms) May 1 11:35:37.664: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.216989ms) May 1 11:35:37.667: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.916425ms) May 1 11:35:37.670: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.08555ms) May 1 11:35:37.673: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.224456ms) May 1 11:35:37.676: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.235902ms) May 1 11:35:37.679: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.194072ms) May 1 11:35:37.683: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.498973ms) May 1 11:35:37.687: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.535259ms) May 1 11:35:37.690: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.650731ms) May 1 11:35:37.694: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.498033ms) May 1 11:35:37.698: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.8032ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:35:37.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-9hdts" for this suite. May 1 11:35:43.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:35:43.753: INFO: namespace: e2e-tests-proxy-9hdts, resource: bindings, ignored listing per whitelist May 1 11:35:43.794: INFO: namespace e2e-tests-proxy-9hdts deletion completed in 6.093269937s • [SLOW TEST:6.299 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:35:43.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e48fabe7-8b9f-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 11:35:43.926: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-e492cb44-8b9f-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-74pnr" to be "success or failure" May 1 11:35:43.955: INFO: Pod "pod-projected-configmaps-e492cb44-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 29.055083ms May 1 11:35:46.261: INFO: Pod "pod-projected-configmaps-e492cb44-8b9f-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334987764s May 1 11:35:48.265: INFO: Pod "pod-projected-configmaps-e492cb44-8b9f-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.338898087s May 1 11:35:50.269: INFO: Pod "pod-projected-configmaps-e492cb44-8b9f-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.343366233s STEP: Saw pod success May 1 11:35:50.269: INFO: Pod "pod-projected-configmaps-e492cb44-8b9f-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:35:50.272: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-e492cb44-8b9f-11ea-88a3-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 1 11:35:50.304: INFO: Waiting for pod pod-projected-configmaps-e492cb44-8b9f-11ea-88a3-0242ac110017 to disappear May 1 11:35:50.317: INFO: Pod pod-projected-configmaps-e492cb44-8b9f-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:35:50.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-74pnr" for this suite. 
May 1 11:35:56.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:35:56.374: INFO: namespace: e2e-tests-projected-74pnr, resource: bindings, ignored listing per whitelist
May 1 11:35:56.406: INFO: namespace e2e-tests-projected-74pnr deletion completed in 6.085701159s
• [SLOW TEST:12.612 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:35:56.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 1 11:35:56.609: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:35:56.612: INFO: Number of nodes with available pods: 0
May 1 11:35:56.612: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:35:57.618: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:35:57.622: INFO: Number of nodes with available pods: 0
May 1 11:35:57.622: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:35:58.617: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:35:58.620: INFO: Number of nodes with available pods: 0
May 1 11:35:58.620: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:35:59.717: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:35:59.719: INFO: Number of nodes with available pods: 0
May 1 11:35:59.719: INFO: Node hunter-worker is running more than one daemon pod
May 1 11:36:00.618: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:00.622: INFO: Number of nodes with available pods: 1
May 1 11:36:00.622: INFO: Node hunter-worker2 is running more than one daemon pod
May 1 11:36:01.618: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:01.621: INFO: Number of nodes with available pods: 2
May 1 11:36:01.621: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 1 11:36:01.771: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:01.775: INFO: Number of nodes with available pods: 1
May 1 11:36:01.775: INFO: Node hunter-worker2 is running more than one daemon pod
May 1 11:36:02.779: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:02.781: INFO: Number of nodes with available pods: 1
May 1 11:36:02.781: INFO: Node hunter-worker2 is running more than one daemon pod
May 1 11:36:03.779: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:03.782: INFO: Number of nodes with available pods: 1
May 1 11:36:03.782: INFO: Node hunter-worker2 is running more than one daemon pod
May 1 11:36:04.780: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:04.784: INFO: Number of nodes with available pods: 1
May 1 11:36:04.784: INFO: Node hunter-worker2 is running more than one daemon pod
May 1 11:36:05.780: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:05.784: INFO: Number of nodes with available pods: 1
May 1 11:36:05.784: INFO: Node hunter-worker2 is running more than one daemon pod
May 1 11:36:06.780: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:06.783: INFO: Number of nodes with available pods: 1
May 1 11:36:06.783: INFO: Node hunter-worker2 is running more than one daemon pod
May 1 11:36:07.779: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:07.782: INFO: Number of nodes with available pods: 1
May 1 11:36:07.782: INFO: Node hunter-worker2 is running more than one daemon pod
May 1 11:36:08.780: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:08.783: INFO: Number of nodes with available pods: 1
May 1 11:36:08.783: INFO: Node hunter-worker2 is running more than one daemon pod
May 1 11:36:09.779: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:09.781: INFO: Number of nodes with available pods: 1
May 1 11:36:09.781: INFO: Node hunter-worker2 is running more than one daemon pod
May 1 11:36:10.779: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 1 11:36:10.782: INFO: Number of nodes with available pods: 2
May 1 11:36:10.782: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vqxnd, will wait for the garbage collector to delete the pods
May 1 11:36:10.842: INFO: Deleting DaemonSet.extensions daemon-set took: 6.454453ms
May 1 11:36:11.043: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.288774ms
May 1 11:36:21.755: INFO: Number of nodes with available pods: 0
May 1 11:36:21.755: INFO: Number of running nodes: 0, number of available pods: 0
May 1 11:36:21.758: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vqxnd/daemonsets","resourceVersion":"8158600"},"items":null}
May 1 11:36:21.760: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vqxnd/pods","resourceVersion":"8158600"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:36:21.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vqxnd" for this suite.
May 1 11:36:27.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:36:27.864: INFO: namespace: e2e-tests-daemonsets-vqxnd, resource: bindings, ignored listing per whitelist
May 1 11:36:27.895: INFO: namespace e2e-tests-daemonsets-vqxnd deletion completed in 6.120187494s
• [SLOW TEST:31.488 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:36:27.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 1 11:36:28.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:36:32.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-n7dq4" for this suite.
May 1 11:37:14.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:37:14.218: INFO: namespace: e2e-tests-pods-n7dq4, resource: bindings, ignored listing per whitelist
May 1 11:37:14.222: INFO: namespace e2e-tests-pods-n7dq4 deletion completed in 42.0923876s
• [SLOW TEST:46.327 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:37:14.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
May 1 11:37:14.790: INFO: Waiting up to 5m0s for pod "client-containers-1abd25eb-8ba0-11ea-88a3-0242ac110017" in namespace "e2e-tests-containers-wc8nz" to be "success or failure"
May 1 11:37:14.969: INFO: Pod "client-containers-1abd25eb-8ba0-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 178.986567ms
May 1 11:37:16.973: INFO: Pod "client-containers-1abd25eb-8ba0-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182558347s
May 1 11:37:18.976: INFO: Pod "client-containers-1abd25eb-8ba0-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185901967s
STEP: Saw pod success
May 1 11:37:18.976: INFO: Pod "client-containers-1abd25eb-8ba0-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:37:18.979: INFO: Trying to get logs from node hunter-worker2 pod client-containers-1abd25eb-8ba0-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 11:37:18.992: INFO: Waiting for pod client-containers-1abd25eb-8ba0-11ea-88a3-0242ac110017 to disappear
May 1 11:37:19.004: INFO: Pod client-containers-1abd25eb-8ba0-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:37:19.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-wc8nz" for this suite.
May 1 11:37:25.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:37:25.117: INFO: namespace: e2e-tests-containers-wc8nz, resource: bindings, ignored listing per whitelist
May 1 11:37:25.143: INFO: namespace e2e-tests-containers-wc8nz deletion completed in 6.136021251s
• [SLOW TEST:10.921 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:37:25.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 1 11:37:25.257: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20fa28e6-8ba0-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-hdgv9" to be "success or failure"
May 1 11:37:25.278: INFO: Pod "downwardapi-volume-20fa28e6-8ba0-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 20.935162ms
May 1 11:37:27.370: INFO: Pod "downwardapi-volume-20fa28e6-8ba0-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113202193s
May 1 11:37:29.374: INFO: Pod "downwardapi-volume-20fa28e6-8ba0-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117128254s
STEP: Saw pod success
May 1 11:37:29.374: INFO: Pod "downwardapi-volume-20fa28e6-8ba0-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:37:29.377: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-20fa28e6-8ba0-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May 1 11:37:29.473: INFO: Waiting for pod downwardapi-volume-20fa28e6-8ba0-11ea-88a3-0242ac110017 to disappear
May 1 11:37:29.489: INFO: Pod downwardapi-volume-20fa28e6-8ba0-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:37:29.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hdgv9" for this suite.
May 1 11:37:35.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:37:35.572: INFO: namespace: e2e-tests-downward-api-hdgv9, resource: bindings, ignored listing per whitelist
May 1 11:37:35.575: INFO: namespace e2e-tests-downward-api-hdgv9 deletion completed in 6.083325948s
• [SLOW TEST:10.431 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:37:35.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0501 11:38:06.572521 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 1 11:38:06.572: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:38:06.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-2wj87" for this suite.
May 1 11:38:14.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:38:14.639: INFO: namespace: e2e-tests-gc-2wj87, resource: bindings, ignored listing per whitelist
May 1 11:38:14.662: INFO: namespace e2e-tests-gc-2wj87 deletion completed in 8.08678528s
• [SLOW TEST:39.087 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:38:14.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:38:18.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-fgl9q" for this suite.
May 1 11:39:04.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:39:04.868: INFO: namespace: e2e-tests-kubelet-test-fgl9q, resource: bindings, ignored listing per whitelist
May 1 11:39:04.874: INFO: namespace e2e-tests-kubelet-test-fgl9q deletion completed in 46.094722578s
• [SLOW TEST:50.212 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:39:04.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-5cb1206d-8ba0-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume configMaps
May 1 11:39:05.544: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5cbd340a-8ba0-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-lhstk" to be "success or failure"
May 1 11:39:05.548: INFO: Pod "pod-projected-configmaps-5cbd340a-8ba0-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.769614ms
May 1 11:39:07.581: INFO: Pod "pod-projected-configmaps-5cbd340a-8ba0-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036774054s
May 1 11:39:09.586: INFO: Pod "pod-projected-configmaps-5cbd340a-8ba0-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041471617s
STEP: Saw pod success
May 1 11:39:09.586: INFO: Pod "pod-projected-configmaps-5cbd340a-8ba0-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:39:09.589: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-5cbd340a-8ba0-11ea-88a3-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May 1 11:39:09.642: INFO: Waiting for pod pod-projected-configmaps-5cbd340a-8ba0-11ea-88a3-0242ac110017 to disappear
May 1 11:39:09.653: INFO: Pod pod-projected-configmaps-5cbd340a-8ba0-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:39:09.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lhstk" for this suite.
May 1 11:39:15.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:39:15.712: INFO: namespace: e2e-tests-projected-lhstk, resource: bindings, ignored listing per whitelist
May 1 11:39:15.757: INFO: namespace e2e-tests-projected-lhstk deletion completed in 6.100063657s
• [SLOW TEST:10.883 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:39:15.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
May 1 11:39:15.876: INFO: Waiting up to 5m0s for pod "pod-62e930c3-8ba0-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-qsgr6" to be "success or failure"
May 1 11:39:15.900: INFO: Pod "pod-62e930c3-8ba0-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 24.147663ms
May 1 11:39:17.903: INFO: Pod "pod-62e930c3-8ba0-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027111145s
May 1 11:39:20.132: INFO: Pod "pod-62e930c3-8ba0-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256237064s
May 1 11:39:22.136: INFO: Pod "pod-62e930c3-8ba0-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.259764302s
May 1 11:39:24.138: INFO: Pod "pod-62e930c3-8ba0-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 8.262241492s
May 1 11:39:26.142: INFO: Pod "pod-62e930c3-8ba0-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.265654196s
STEP: Saw pod success
May 1 11:39:26.142: INFO: Pod "pod-62e930c3-8ba0-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:39:26.145: INFO: Trying to get logs from node hunter-worker pod pod-62e930c3-8ba0-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 11:39:26.211: INFO: Waiting for pod pod-62e930c3-8ba0-11ea-88a3-0242ac110017 to disappear
May 1 11:39:26.224: INFO: Pod pod-62e930c3-8ba0-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:39:26.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qsgr6" for this suite.
May 1 11:39:32.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:39:32.253: INFO: namespace: e2e-tests-emptydir-qsgr6, resource: bindings, ignored listing per whitelist
May 1 11:39:32.324: INFO: namespace e2e-tests-emptydir-qsgr6 deletion completed in 6.096955358s
• [SLOW TEST:16.567 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:39:32.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 1 11:39:32.493: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 11:40:00.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-mhxmp" for this suite.
May 1 11:40:07.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 11:40:07.079: INFO: namespace: e2e-tests-init-container-mhxmp, resource: bindings, ignored listing per whitelist
May 1 11:40:07.081: INFO: namespace e2e-tests-init-container-mhxmp deletion completed in 6.074200428s
• [SLOW TEST:34.757 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 11:40:07.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 1 11:40:07.250: INFO: PodSpec: initContainers in spec.initContainers
May 1 11:44:21.361: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""},
ObjectMeta:v1.ObjectMeta{Name:"pod-init-81893fce-8ba0-11ea-88a3-0242ac110017", GenerateName:"", Namespace:"e2e-tests-init-container-pz65m", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-pz65m/pods/pod-init-81893fce-8ba0-11ea-88a3-0242ac110017", UID:"8189f435-8ba0-11ea-99e8-0242ac110002", ResourceVersion:"8159713", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723930007, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"250535125"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pg5hd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001a6a180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), 
ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pg5hd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pg5hd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pg5hd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f10728), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0008f5200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", 
Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f107b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f107d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f107d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f107dc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723930007, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723930007, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723930007, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723930007, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.142", StartTime:(*v1.Time)(0xc001dc8360), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001dc88a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00238c150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://983d02a1e94bba2978e299a1ec262658ec518b41bf4ebe6604bc5ab1479ed978"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001dc8940), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001dc8400), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:44:21.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-pz65m" for this suite. 
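Editor's note: the pod dump above is dense, but the manifest the RestartAlways init-container test submits reduces to roughly the following. This is a hedged reconstruction from the struct dump (images, commands, and names taken from it; the `make_init_fail_pod` helper is illustrative, not the e2e framework's code):

```python
# Sketch of the pod the test creates: a RestartAlways pod whose first init
# container always fails, so later init containers and the app container
# must never start. (Reconstructed from the dump above, not framework code.)

def make_init_fail_pod(name):
    """Build a minimal pod manifest with a permanently failing init container."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Always",  # init1 is retried with backoff forever
            "initContainers": [
                {"name": "init1", "image": "docker.io/library/busybox:1.29",
                 "command": ["/bin/false"]},   # exits non-zero every attempt
                {"name": "init2", "image": "docker.io/library/busybox:1.29",
                 "command": ["/bin/true"]},    # never reached while init1 fails
            ],
            "containers": [
                {"name": "run1", "image": "k8s.gcr.io/pause:3.1"},
            ],
        },
    }

pod = make_init_fail_pod("pod-init-demo")
# Init containers run strictly in order, and a failure blocks everything
# after it. That is why the dump shows init2 and run1 still in Waiting state,
# init1 with RestartCount:3, and the pod phase stuck at Pending with
# condition reason ContainersNotInitialized.
assert pod["spec"]["initContainers"][0]["command"] == ["/bin/false"]
assert pod["spec"]["restartPolicy"] == "Always"
```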
May 1 11:44:47.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:44:47.606: INFO: namespace: e2e-tests-init-container-pz65m, resource: bindings, ignored listing per whitelist May 1 11:44:47.627: INFO: namespace e2e-tests-init-container-pz65m deletion completed in 26.22017764s • [SLOW TEST:280.545 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:44:47.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 1 11:44:48.988: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9szcz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9szcz/configmaps/e2e-watch-test-resource-version,UID:29107a8c-8ba1-11ea-99e8-0242ac110002,ResourceVersion:8159786,Generation:0,CreationTimestamp:2020-05-01 11:44:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 1 11:44:48.988: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9szcz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9szcz/configmaps/e2e-watch-test-resource-version,UID:29107a8c-8ba1-11ea-99e8-0242ac110002,ResourceVersion:8159787,Generation:0,CreationTimestamp:2020-05-01 11:44:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:44:48.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-9szcz" for this suite. 
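Editor's note: the watch test above starts a watch at the resourceVersion returned by the first update and then expects to see only the second MODIFIED and the DELETED events. A toy in-memory model of that semantic (this is not client-go; the starting version 8159785 is illustrative, while 8159786 and 8159787 come from the log):

```python
# Hedged sketch of the semantics the test exercises: a watch started at
# resourceVersion N replays only events with a version strictly greater
# than N, so changes made before the watch began are not re-delivered.

events = [
    ("ADDED",    8159784),   # create (illustrative version)
    ("MODIFIED", 8159785),   # first update -> watch starts from this version
    ("MODIFIED", 8159786),   # second update   (version from the log above)
    ("DELETED",  8159787),   # delete          (version from the log above)
]

def watch_from(events, resource_version):
    """Yield only the events that happened after the given resourceVersion."""
    return [(typ, rv) for typ, rv in events if rv > resource_version]

seen = watch_from(events, 8159785)
# Matches the two "Got : MODIFIED" / "Got : DELETED" notifications logged.
assert seen == [("MODIFIED", 8159786), ("DELETED", 8159787)]
```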
May 1 11:44:55.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:44:55.394: INFO: namespace: e2e-tests-watch-9szcz, resource: bindings, ignored listing per whitelist May 1 11:44:55.399: INFO: namespace e2e-tests-watch-9szcz deletion completed in 6.384734809s • [SLOW TEST:7.772 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:44:55.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 1 11:44:56.148: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9lmxf,SelfLink:/api/v1/namespaces/e2e-tests-watch-9lmxf/configmaps/e2e-watch-test-label-changed,UID:2d9c5caf-8ba1-11ea-99e8-0242ac110002,ResourceVersion:8159810,Generation:0,CreationTimestamp:2020-05-01 11:44:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 1 11:44:56.148: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9lmxf,SelfLink:/api/v1/namespaces/e2e-tests-watch-9lmxf/configmaps/e2e-watch-test-label-changed,UID:2d9c5caf-8ba1-11ea-99e8-0242ac110002,ResourceVersion:8159811,Generation:0,CreationTimestamp:2020-05-01 11:44:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 1 11:44:56.149: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9lmxf,SelfLink:/api/v1/namespaces/e2e-tests-watch-9lmxf/configmaps/e2e-watch-test-label-changed,UID:2d9c5caf-8ba1-11ea-99e8-0242ac110002,ResourceVersion:8159812,Generation:0,CreationTimestamp:2020-05-01 11:44:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 1 11:45:06.218: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9lmxf,SelfLink:/api/v1/namespaces/e2e-tests-watch-9lmxf/configmaps/e2e-watch-test-label-changed,UID:2d9c5caf-8ba1-11ea-99e8-0242ac110002,ResourceVersion:8159832,Generation:0,CreationTimestamp:2020-05-01 11:44:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 1 11:45:06.218: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9lmxf,SelfLink:/api/v1/namespaces/e2e-tests-watch-9lmxf/configmaps/e2e-watch-test-label-changed,UID:2d9c5caf-8ba1-11ea-99e8-0242ac110002,ResourceVersion:8159833,Generation:0,CreationTimestamp:2020-05-01 11:44:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 1 11:45:06.218: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9lmxf,SelfLink:/api/v1/namespaces/e2e-tests-watch-9lmxf/configmaps/e2e-watch-test-label-changed,UID:2d9c5caf-8ba1-11ea-99e8-0242ac110002,ResourceVersion:8159834,Generation:0,CreationTimestamp:2020-05-01 11:44:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:45:06.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-9lmxf" for this suite. May 1 11:45:14.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:45:15.315: INFO: namespace: e2e-tests-watch-9lmxf, resource: bindings, ignored listing per whitelist May 1 11:45:15.325: INFO: namespace e2e-tests-watch-9lmxf deletion completed in 9.067477368s • [SLOW TEST:19.926 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:45:15.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-3caadf3d-8ba1-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume secrets May 1 11:45:24.481: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-cf7hn" to be "success or failure" May 1 11:45:26.425: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1.944157534s May 1 11:45:31.281: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.799950677s May 1 11:45:33.559: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.077305824s May 1 11:45:35.720: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.239161337s May 1 11:45:38.023: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 13.541838311s May 1 11:45:42.064: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 17.583216252s May 1 11:45:44.874: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.393000224s May 1 11:45:46.878: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.396837952s May 1 11:45:48.881: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 24.400077152s May 1 11:45:51.101: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 26.619590229s May 1 11:45:53.103: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.622164241s May 1 11:45:55.107: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 30.625418499s May 1 11:45:57.110: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 32.628333616s May 1 11:45:59.113: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 34.631589402s May 1 11:46:04.164: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 39.682740853s May 1 11:46:06.317: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 41.835791411s May 1 11:46:08.716: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 44.234551099s May 1 11:46:10.784: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 46.303135281s May 1 11:46:12.787: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 48.305947174s STEP: Saw pod success May 1 11:46:12.787: INFO: Pod "pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:46:12.790: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 1 11:46:12.857: INFO: Waiting for pod pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017 to disappear May 1 11:46:12.866: INFO: Pod pod-projected-secrets-3e079949-8ba1-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:46:12.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cf7hn" for this suite. May 1 11:46:20.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:46:20.930: INFO: namespace: e2e-tests-projected-cf7hn, resource: bindings, ignored listing per whitelist May 1 11:46:20.944: INFO: namespace e2e-tests-projected-cf7hn deletion completed in 8.075616986s • [SLOW TEST:65.619 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client May 1 11:46:20.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0501 11:46:42.271055 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 11:46:42.271: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: 
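Editor's note: the garbage-collector test above gives half the pods a second owner before deleting the first ReplicationController. The ownership rule it verifies, sketched as a toy model (the real GC walks `ownerReferences` server-side; this is only an illustration):

```python
# Sketch of the rule under test: a dependent is only garbage-collected when
# ALL of its owners are gone. Deleting simpletest-rc-to-be-deleted (even
# while it waits on its dependents) must not remove pods that also list
# simpletest-rc-to-stay as an owner.

def surviving_dependents(dependents, deleted_owner):
    """Keep every dependent that still has at least one live owner."""
    survivors = []
    for dep in dependents:
        remaining = [o for o in dep["ownerReferences"] if o != deleted_owner]
        if remaining:
            survivors.append(dep)
    return survivors

pods = [
    {"name": "pod-a", "ownerReferences": ["simpletest-rc-to-be-deleted"]},
    {"name": "pod-b", "ownerReferences": ["simpletest-rc-to-be-deleted",
                                          "simpletest-rc-to-stay"]},
]
left = surviving_dependents(pods, "simpletest-rc-to-be-deleted")
# pod-a had only the deleted owner and is collected; pod-b survives.
assert [p["name"] for p in left] == ["pod-b"]
```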
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:46:42.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-rb4v2" for this suite. May 1 11:47:19.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:47:20.755: INFO: namespace: e2e-tests-gc-rb4v2, resource: bindings, ignored listing per whitelist May 1 11:47:20.792: INFO: namespace e2e-tests-gc-rb4v2 deletion completed in 38.518162527s • [SLOW TEST:59.847 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:47:20.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
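Editor's note: the prestop test that runs next deletes a pod carrying an exec-style `preStop` hook and then checks that the handler pod (created in the step above) received the hook's notification before the container terminated. A hedged sketch of such a pod, with the image, command, and `handler` hostname purely illustrative:

```python
# Rough sketch of a pod like pod-with-prestop-exec-hook: on deletion, the
# preStop exec command must run to completion (notifying the hook handler)
# before the container is killed. All concrete values here are assumptions,
# not the e2e framework's actual manifest.

prestop_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-exec-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-prestop-exec-hook",
            "image": "docker.io/library/busybox:1.29",  # illustrative image
            "lifecycle": {
                "preStop": {
                    "exec": {
                        # Illustrative command: tell the handler pod we ran.
                        "command": ["sh", "-c",
                                    "wget -qO- http://handler:8080/echo?msg=prestop"],
                    }
                }
            },
        }],
    },
}

hook = prestop_pod["spec"]["containers"][0]["lifecycle"]["preStop"]
assert "exec" in hook  # an exec hook, as the test name says
```

The repeated "still exists" polling in the log below is the test waiting out graceful termination: deletion is not instant precisely because the hook and the grace period must elapse first.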
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 1 11:48:47.962: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:48:47.978: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:48:49.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:48:50.002: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:48:51.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:48:52.026: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:48:53.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:48:53.982: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:48:55.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:48:56.501: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:48:57.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:48:58.746: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:48:59.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:49:00.812: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:49:01.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:49:01.981: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:49:03.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:49:03.981: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:49:05.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:49:06.235: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:49:07.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:49:07.982: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:49:09.978: INFO: Waiting for pod pod-with-prestop-exec-hook 
to disappear May 1 11:49:09.990: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:49:11.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:49:11.982: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:49:13.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:49:13.981: INFO: Pod pod-with-prestop-exec-hook still exists May 1 11:49:15.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 11:49:15.981: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:49:15.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ps6r6" for this suite. May 1 11:50:44.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:50:44.077: INFO: namespace: e2e-tests-container-lifecycle-hook-ps6r6, resource: bindings, ignored listing per whitelist May 1 11:50:44.097: INFO: namespace e2e-tests-container-lifecycle-hook-ps6r6 deletion completed in 1m28.106416153s • [SLOW TEST:203.305 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:50:44.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 1 11:50:54.313: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fdd135a0-8ba1-11ea-88a3-0242ac110017" May 1 11:50:54.313: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fdd135a0-8ba1-11ea-88a3-0242ac110017" in namespace "e2e-tests-pods-mzg7p" to be "terminated due to deadline exceeded" May 1 11:50:54.353: INFO: Pod "pod-update-activedeadlineseconds-fdd135a0-8ba1-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 40.533038ms May 1 11:50:56.410: INFO: Pod "pod-update-activedeadlineseconds-fdd135a0-8ba1-11ea-88a3-0242ac110017": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.097160897s May 1 11:50:56.410: INFO: Pod "pod-update-activedeadlineseconds-fdd135a0-8ba1-11ea-88a3-0242ac110017" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:50:56.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-mzg7p" for this suite. 
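The test above exercises the mutable pod field `spec.activeDeadlineSeconds`: the pod is created, then updated with a short deadline, and the kubelet subsequently fails the pod with reason `DeadlineExceeded`, which is what the log's "terminated due to deadline exceeded" condition checks. A minimal manifest sketch of the idea (the name, image, and deadline value are illustrative, not taken from the log; the e2e test builds an equivalent spec in Go and applies the deadline as an update):

```yaml
# Hypothetical sketch of the spec under test.
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds   # illustrative name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1            # any long-running image works
  # In the e2e test this field is set via an update after creation;
  # once the deadline elapses, the pod transitions to Phase=Failed
  # with Reason=DeadlineExceeded, as seen in the log.
  activeDeadlineSeconds: 5
```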
May 1 11:51:10.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:51:10.995: INFO: namespace: e2e-tests-pods-mzg7p, resource: bindings, ignored listing per whitelist May 1 11:51:11.016: INFO: namespace e2e-tests-pods-mzg7p deletion completed in 14.577161544s • [SLOW TEST:26.919 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:51:11.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 1 11:51:12.337: INFO: Waiting up to 5m0s for pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017" in namespace "e2e-tests-containers-7llgp" to be "success or failure" May 1 11:51:12.386: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.632972ms May 1 11:51:15.164: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.827344407s May 1 11:51:17.169: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.83173077s May 1 11:51:19.862: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.525469945s May 1 11:51:21.869: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.532354846s May 1 11:51:23.872: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.535356152s May 1 11:51:27.832: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 15.495150252s May 1 11:51:29.835: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 17.498408753s May 1 11:51:31.839: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 19.50170864s May 1 11:51:34.244: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.907616498s May 1 11:51:38.652: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 26.315486841s May 1 11:51:40.655: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.318113241s May 1 11:51:42.658: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.32151077s May 1 11:51:44.665: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 32.328327991s May 1 11:51:46.772: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 34.435540627s May 1 11:51:50.709: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 38.372109403s May 1 11:51:52.783: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 40.446017252s May 1 11:51:54.836: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 42.499513311s May 1 11:51:56.884: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 44.547263227s May 1 11:51:59.262: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 46.925530272s May 1 11:52:01.544: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 49.207187864s May 1 11:52:03.546: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 51.209323615s May 1 11:52:05.549: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 53.212173542s May 1 11:52:07.552: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 55.215644209s May 1 11:52:10.472: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.134956096s May 1 11:52:12.475: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.138516404s May 1 11:52:15.103: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.766123846s May 1 11:52:17.106: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.76940109s May 1 11:52:19.496: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 1m7.159617643s May 1 11:52:21.500: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m9.162859215s STEP: Saw pod success May 1 11:52:21.500: INFO: Pod "client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 11:52:21.502: INFO: Trying to get logs from node hunter-worker2 pod client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017 container test-container: STEP: delete the pod May 1 11:52:21.988: INFO: Waiting for pod client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017 to disappear May 1 11:52:22.790: INFO: Pod client-containers-0d8a8059-8ba2-11ea-88a3-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:52:22.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-7llgp" for this suite. 
May 1 11:52:28.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:52:28.861: INFO: namespace: e2e-tests-containers-7llgp, resource: bindings, ignored listing per whitelist May 1 11:52:28.880: INFO: namespace e2e-tests-containers-7llgp deletion completed in 6.08411238s • [SLOW TEST:77.863 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:52:28.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 1 11:52:28.971: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 11:52:59.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "e2e-tests-init-container-2pw6g" for this suite. May 1 11:55:20.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:55:21.096: INFO: namespace: e2e-tests-init-container-2pw6g, resource: bindings, ignored listing per whitelist May 1 11:55:21.134: INFO: namespace e2e-tests-init-container-2pw6g deletion completed in 2m21.263464873s • [SLOW TEST:172.255 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:55:21.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 11:55:27.847: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-nbpm4" to be "success or failure" May 1 11:55:29.172: INFO: Pod 
"downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1.325335744s May 1 11:55:31.187: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.339849058s May 1 11:55:33.190: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.343243829s May 1 11:55:35.517: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.670028434s May 1 11:55:38.906: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.058497869s May 1 11:55:40.909: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 13.061834403s May 1 11:55:42.912: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 15.064999235s May 1 11:55:45.379: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 17.53222266s May 1 11:55:47.714: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 19.867270299s May 1 11:55:49.717: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.87033221s May 1 11:55:53.191: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 25.343441504s May 1 11:55:56.500: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.652680456s May 1 11:55:58.503: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.656099892s May 1 11:56:00.506: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 32.658856886s May 1 11:56:02.509: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 34.662186032s May 1 11:56:04.513: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 36.665850018s May 1 11:56:06.852: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 39.004955153s May 1 11:56:08.856: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 41.008935985s May 1 11:56:10.860: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 43.012692651s May 1 11:56:14.115: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 46.268251002s May 1 11:56:17.474: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 49.626791684s May 1 11:56:19.477: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 51.630009866s May 1 11:56:21.480: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 53.632605589s May 1 11:56:24.741: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 56.89392146s May 1 11:56:26.793: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.945667698s May 1 11:56:29.328: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.480939768s May 1 11:56:31.427: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.580079587s May 1 11:56:33.431: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.583426939s May 1 11:56:35.434: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.586779408s May 1 11:56:39.957: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.110311543s May 1 11:56:42.321: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.473377664s May 1 11:56:46.740: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.892694498s May 1 11:56:48.744: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.897021021s May 1 11:56:51.668: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.820545983s May 1 11:56:53.672: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.824389982s May 1 11:56:55.907: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.059918612s May 1 11:56:58.055: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m30.207727433s May 1 11:57:00.060: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.213159993s May 1 11:57:02.788: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.940397742s May 1 11:57:05.112: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m37.265236397s May 1 11:57:07.506: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m39.658961082s May 1 11:57:11.321: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m43.47381155s May 1 11:57:13.324: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m45.477135484s May 1 11:57:15.328: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m47.480556397s May 1 11:57:17.333: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m49.485934033s May 1 11:57:19.997: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.150323318s May 1 11:57:22.238: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.391039572s May 1 11:57:24.241: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.393843544s May 1 11:57:26.413: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m58.566014183s May 1 11:57:28.417: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.569866963s May 1 11:57:30.605: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.75738879s May 1 11:57:34.752: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.904697062s May 1 11:57:36.879: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2m9.032037883s May 1 11:57:38.883: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2m11.035478092s May 1 11:57:40.892: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2m13.044584137s May 1 11:57:42.895: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2m15.048279015s May 1 11:57:44.898: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2m17.051208395s May 1 11:57:48.068: INFO: Pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017": Phase="Failed", Reason="", readiness=false. 
Elapsed: 2m20.220969417s May 1 11:57:51.684: INFO: Output of node "hunter-worker2" pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017" container "client-container": failed to open log file "/var/log/pods/a59ab152-8ba2-11ea-99e8-0242ac110002/client-container/0.log": open /var/log/pods/a59ab152-8ba2-11ea-99e8-0242ac110002/client-container/0.log: no such file or directory STEP: delete the pod May 1 11:57:54.595: INFO: Waiting for pod downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017 to disappear May 1 11:57:55.999: INFO: Pod downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017 no longer exists May 1 11:57:55.999: INFO: Unexpected error occurred: expected pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017" success: pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.4 PodIP:10.244.2.152 StartTime:2020-05-01 11:55:29 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:&ContainerStateWaiting{Reason:CreateContainerError,Message:failed to reserve container name "client-container_downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017_e2e-tests-projected-nbpm4_a59ab152-8ba2-11ea-99e8-0242ac110002_0": name 
"client-container_downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017_e2e-tests-projected-nbpm4_a59ab152-8ba2-11ea-99e8-0242ac110002_0" is reserved for "fde9aa17ca8d42ea2b0f22085556d4224d1f787c22250c7244378b8c71e4b617",} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://fde9aa17ca8d42ea2b0f22085556d4224d1f787c22250c7244378b8c71e4b617,}} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:containerd://fde9aa17ca8d42ea2b0f22085556d4224d1f787c22250c7244378b8c71e4b617}] QOSClass:Burstable} [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 STEP: Collecting events from namespace "e2e-tests-projected-nbpm4". STEP: Found 4 events. 
May 1 11:57:56.004: INFO: At 2020-05-01 11:55:28 +0000 UTC - event for downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017: {default-scheduler } Scheduled: Successfully assigned e2e-tests-projected-nbpm4/downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017 to hunter-worker2 May 1 11:57:56.004: INFO: At 2020-05-01 11:55:31 +0000 UTC - event for downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017: {kubelet hunter-worker2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine May 1 11:57:56.004: INFO: At 2020-05-01 11:57:31 +0000 UTC - event for downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017: {kubelet hunter-worker2} Failed: Error: context deadline exceeded May 1 11:57:56.004: INFO: At 2020-05-01 11:57:33 +0000 UTC - event for downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017: {kubelet hunter-worker2} Failed: Error: failed to reserve container name "client-container_downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017_e2e-tests-projected-nbpm4_a59ab152-8ba2-11ea-99e8-0242ac110002_0": name "client-container_downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017_e2e-tests-projected-nbpm4_a59ab152-8ba2-11ea-99e8-0242ac110002_0" is reserved for "fde9aa17ca8d42ea2b0f22085556d4224d1f787c22250c7244378b8c71e4b617" May 1 11:57:56.010: INFO: POD NODE PHASE GRACE CONDITIONS May 1 11:57:56.010: INFO: coredns-54ff9cd656-4h7lb hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC }] May 1 11:57:56.010: INFO: coredns-54ff9cd656-8vrkk hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC } {ContainersReady 
True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC }] May 1 11:57:56.010: INFO: etcd-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 1 11:57:56.010: INFO: kindnet-54h7m hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 1 11:57:56.010: INFO: kindnet-l2xm6 hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC }] May 1 11:57:56.010: INFO: kindnet-mtqrs hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 1 11:57:56.010: INFO: kube-apiserver-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-03-15 18:22:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 1 11:57:56.010: INFO: kube-controller-manager-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 1 11:57:56.010: INFO: kube-proxy-mmppc hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC }] May 1 11:57:56.010: INFO: kube-proxy-s52ll hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 1 11:57:56.010: INFO: kube-proxy-szbng hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 1 11:57:56.010: INFO: kube-scheduler-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-03-15 18:22:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 1 11:57:56.010: INFO: local-path-provisioner-77cfdd744c-q47vg hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 09:07:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 09:07:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:41 +0000 UTC }] May 1 11:57:56.011: INFO: May 1 11:57:56.012: INFO: Logging node info for node hunter-control-plane May 1 11:57:56.014: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-control-plane,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-control-plane,UID:faa448b1-66e9-11ea-99e8-0242ac110002,ResourceVersion:8161299,Generation:0,CreationTimestamp:2020-03-15 18:22:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-control-plane,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 
2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-01 11:57:48 +0000 UTC 2020-03-15 18:22:49 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-01 11:57:48 +0000 UTC 2020-03-15 18:22:49 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-01 11:57:48 +0000 UTC 2020-03-15 18:22:49 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-01 11:57:48 +0000 UTC 2020-03-15 18:23:41 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.2} {Hostname hunter-control-plane}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3c4716968dac483293a23c2100ad64a5,SystemUUID:683417f7-64ca-431d-b8ac-22e73b26255e,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.13.12,KubeProxyVersion:v1.13.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.2.24] 219889590} {[k8s.gcr.io/kube-apiserver:v1.13.12] 182535474} {[k8s.gcr.io/kube-controller-manager:v1.13.12] 147799876} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.13.12] 82073262} {[k8s.gcr.io/kube-scheduler:v1.13.12] 81117489} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.2.6] 40280546} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[k8s.gcr.io/pause:3.1] 746479}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 1 11:57:56.014: INFO: Logging kubelet events for node hunter-control-plane May 1 11:57:56.016: INFO: Logging pods the kubelet thinks is on node hunter-control-plane May 1 11:57:56.114: INFO: kube-apiserver-hunter-control-plane 
started at (0+0 container statuses recorded)
May 1 11:57:56.114: INFO: kube-controller-manager-hunter-control-plane started at (0+0 container statuses recorded)
May 1 11:57:56.114: INFO: kube-scheduler-hunter-control-plane started at (0+0 container statuses recorded)
May 1 11:57:56.114: INFO: etcd-hunter-control-plane started at (0+0 container statuses recorded)
May 1 11:57:56.115: INFO: kube-proxy-mmppc started at 2020-03-15 18:23:08 +0000 UTC (0+1 container statuses recorded)
May 1 11:57:56.115: INFO: Container kube-proxy ready: true, restart count 0
May 1 11:57:56.115: INFO: kindnet-l2xm6 started at 2020-03-15 18:23:08 +0000 UTC (0+1 container statuses recorded)
May 1 11:57:56.115: INFO: Container kindnet-cni ready: true, restart count 0
May 1 11:57:56.115: INFO: local-path-provisioner-77cfdd744c-q47vg started at 2020-03-15 18:23:41 +0000 UTC (0+1 container statuses recorded)
May 1 11:57:56.115: INFO: Container local-path-provisioner ready: true, restart count 4
W0501 11:57:56.118800 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 1 11:57:56.486: INFO: Latency metrics for node hunter-control-plane May 1 11:57:56.486: INFO: Logging node info for node hunter-worker May 1 11:57:56.489: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-worker,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-worker,UID:06f62848-66ea-11ea-99e8-0242ac110002,ResourceVersion:8161296,Generation:0,CreationTimestamp:2020-03-15 18:23:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-worker,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-01 11:57:46 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-01 11:57:46 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-01 11:57:46 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientPID kubelet has 
sufficient PID available} {Ready True 2020-05-01 11:57:46 +0000 UTC 2020-03-15 18:23:32 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.3} {Hostname hunter-worker}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1ba315df6f584c2d8a0cf4ead2df3551,SystemUUID:64c934e2-ea4e-48d7-92ee-50d04109360b,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.13.12,KubeProxyVersion:v1.13.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.2.24] 219889590} {[k8s.gcr.io/kube-apiserver:v1.13.12] 182535474} {[k8s.gcr.io/kube-controller-manager:v1.13.12] 147799876} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/kube-proxy:v1.13.12] 82073262} {[k8s.gcr.io/kube-scheduler:v1.13.12] 81117489} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[docker.io/library/nginx@sha256:86ae264c3f4acb99b2dee4d0098c40cb8c46dcf9e1148f05d3a51c4df6758c12 docker.io/library/nginx:latest] 51030102} {[docker.io/library/nginx@sha256:d96d2b8f130247d1402389f80a6250382c0882e7fdd5484d2932e813e8b3742f docker.io/library/nginx@sha256:f1a695380f06cf363bf45fa85774cfcb5e60fe1557504715ff96a1933d6cbf51 docker.io/library/nginx@sha256:d81f010955749350ef31a119fb94b180fde8b2f157da351ff5667ae037968b28] 51030066} {[docker.io/library/nginx@sha256:282530fcb7cd19f3848c7b611043f82ae4be3781cb00105a1d593d7e6286b596 docker.io/library/nginx@sha256:e538de36780000ab3502edcdadd1e6990b981abc3f61f13584224b9e1674922c] 51022481} 
{[docker.io/library/nginx@sha256:2539d4344dd18e1df02be842ffc435f8e1f699cfc55516e2cf2cb16b7a9aea0b] 51021980} {[k8s.gcr.io/coredns:1.2.6] 40280546} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 1743226} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 1 11:57:56.490: INFO: Logging kubelet events for node hunter-worker May 1 11:57:56.492: INFO: Logging pods the kubelet thinks is on node hunter-worker May 1 11:57:56.499: INFO: coredns-54ff9cd656-4h7lb started at 2020-03-15 18:23:32 +0000 UTC (0+1 container statuses recorded) May 1 11:57:56.499: INFO: Container coredns ready: true, restart count 0 May 1 11:57:56.499: INFO: kube-proxy-szbng started at 2020-03-15 18:23:11 +0000 UTC (0+1 container statuses recorded) May 1 11:57:56.499: INFO: Container kube-proxy ready: true, restart count 0 May 1 11:57:56.499: INFO: kindnet-54h7m started at 2020-03-15 18:23:12 +0000 UTC (0+1 
container statuses recorded) May 1 11:57:56.499: INFO: Container kindnet-cni ready: true, restart count 0 W0501 11:57:56.501425 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 11:57:56.532: INFO: Latency metrics for node hunter-worker May 1 11:57:56.532: INFO: Logging node info for node hunter-worker2 May 1 11:57:56.559: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-worker2,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-worker2,UID:073ca987-66ea-11ea-99e8-0242ac110002,ResourceVersion:8161302,Generation:0,CreationTimestamp:2020-03-15 18:23:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-worker2,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-01 11:57:50 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient 
memory available} {DiskPressure False 2020-05-01 11:57:50 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-01 11:57:50 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-01 11:57:50 +0000 UTC 2020-03-15 18:23:32 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.4} {Hostname hunter-worker2}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dde8970cf1ce42c0bbb19e593c484fda,SystemUUID:9c4b9179-843d-4e50-859c-2ca9335431a5,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.13.12,KubeProxyVersion:v1.13.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.2.24] 219889590} {[k8s.gcr.io/kube-apiserver:v1.13.12] 182535474} {[k8s.gcr.io/kube-controller-manager:v1.13.12] 147799876} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/kube-proxy:v1.13.12] 82073262} {[k8s.gcr.io/kube-scheduler:v1.13.12] 81117489} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[docker.io/library/nginx@sha256:86ae264c3f4acb99b2dee4d0098c40cb8c46dcf9e1148f05d3a51c4df6758c12 docker.io/library/nginx:latest] 51030102} {[docker.io/library/nginx@sha256:d81f010955749350ef31a119fb94b180fde8b2f157da351ff5667ae037968b28 docker.io/library/nginx@sha256:d96d2b8f130247d1402389f80a6250382c0882e7fdd5484d2932e813e8b3742f] 51030066} 
{[docker.io/library/nginx@sha256:282530fcb7cd19f3848c7b611043f82ae4be3781cb00105a1d593d7e6286b596 docker.io/library/nginx@sha256:e538de36780000ab3502edcdadd1e6990b981abc3f61f13584224b9e1674922c] 51022481} {[docker.io/library/nginx@sha256:2539d4344dd18e1df02be842ffc435f8e1f699cfc55516e2cf2cb16b7a9aea0b] 51021980} {[k8s.gcr.io/coredns:1.2.6] 40280546} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} 
{[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 1743226} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 1 11:57:56.559: INFO: Logging kubelet events for node hunter-worker2 May 1 11:57:56.562: INFO: Logging pods the kubelet thinks is on node hunter-worker2 May 1 11:57:56.566: INFO: kindnet-mtqrs started at 2020-03-15 18:23:12 +0000 UTC (0+1 container statuses recorded) May 1 11:57:56.566: INFO: Container kindnet-cni ready: true, restart count 0 
May 1 11:57:56.566: INFO: coredns-54ff9cd656-8vrkk started at 2020-03-15 18:23:32 +0000 UTC (0+1 container statuses recorded)
May 1 11:57:56.566: INFO: Container coredns ready: true, restart count 0
May 1 11:57:56.566: INFO: kube-proxy-s52ll started at 2020-03-15 18:23:12 +0000 UTC (0+1 container statuses recorded)
May 1 11:57:56.566: INFO: Container kube-proxy ready: true, restart count 0
W0501 11:57:56.568950 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 1 11:57:56.616: INFO: Latency metrics for node hunter-worker2
May 1 11:57:56.616: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m3.893929s}
May 1 11:57:56.616: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m3.893929s}
May 1 11:57:56.616: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m3.893929s}
May 1 11:57:56.616: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m1.21298s}
May 1 11:57:56.616: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m1.21298s}
May 1 11:57:56.616: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m1.21298s}
May 1 11:57:56.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nbpm4" for this suite.
May 1 11:58:15.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 11:58:16.401: INFO: namespace: e2e-tests-projected-nbpm4, resource: bindings, ignored listing per whitelist May 1 11:58:16.416: INFO: namespace e2e-tests-projected-nbpm4 deletion completed in 19.742473254s • Failure [175.282 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Expected error: <*errors.errorString | 0xc001708b00>: { s: "expected pod \"downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017\" success: pod \"downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.4 PodIP:10.244.2.152 StartTime:2020-05-01 11:55:29 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:&ContainerStateWaiting{Reason:CreateContainerError,Message:failed to reserve container name 
\"client-container_downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017_e2e-tests-projected-nbpm4_a59ab152-8ba2-11ea-99e8-0242ac110002_0\": name \"client-container_downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017_e2e-tests-projected-nbpm4_a59ab152-8ba2-11ea-99e8-0242ac110002_0\" is reserved for \"fde9aa17ca8d42ea2b0f22085556d4224d1f787c22250c7244378b8c71e4b617\",} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://fde9aa17ca8d42ea2b0f22085556d4224d1f787c22250c7244378b8c71e4b617,}} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:containerd://fde9aa17ca8d42ea2b0f22085556d4224d1f787c22250c7244378b8c71e4b617}] QOSClass:Burstable}", } expected pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017" success: pod "downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 11:55:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.4 PodIP:10.244.2.152 
StartTime:2020-05-01 11:55:29 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:&ContainerStateWaiting{Reason:CreateContainerError,Message:failed to reserve container name "client-container_downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017_e2e-tests-projected-nbpm4_a59ab152-8ba2-11ea-99e8-0242ac110002_0": name "client-container_downwardapi-volume-a40fa4e9-8ba2-11ea-88a3-0242ac110017_e2e-tests-projected-nbpm4_a59ab152-8ba2-11ea-99e8-0242ac110002_0" is reserved for "fde9aa17ca8d42ea2b0f22085556d4224d1f787c22250c7244378b8c71e4b617",} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://fde9aa17ca8d42ea2b0f22085556d4224d1f787c22250c7244378b8c71e4b617,}} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:containerd://fde9aa17ca8d42ea2b0f22085556d4224d1f787c22250c7244378b8c71e4b617}] QOSClass:Burstable} not to have occurred /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2395 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 11:58:16.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable 
in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0b522368-8ba3-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume secrets
May 1 11:58:17.634: INFO: Waiting up to 5m0s for pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017" in namespace "e2e-tests-secrets-nnjr6" to be "success or failure"
May 1 11:58:17.885: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 250.556946ms
May 1 11:58:20.812: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.17775808s
May 1 11:58:22.850: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.215963202s
May 1 11:58:25.010: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.375723262s
May 1 11:58:27.310: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.675930292s
May 1 11:58:29.750: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.115416312s
May 1 11:58:37.085: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 19.451192041s
May 1 11:58:39.089: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.454489836s
May 1 11:58:41.092: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 23.457544362s
May 1 11:58:43.095: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 25.46041224s
May 1 11:58:46.210: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.575627776s
May 1 11:58:48.576: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 30.942216731s
May 1 11:58:51.490: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 33.855855497s
May 1 11:58:53.493: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 35.858834694s
May 1 11:58:56.222: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 38.587889416s
May 1 11:58:58.225: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 40.590920733s
May 1 11:59:01.579: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 43.944745757s
May 1 11:59:03.582: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 45.948049456s
May 1 11:59:05.586: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 47.951615473s
May 1 11:59:09.299: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 51.664983792s
May 1 11:59:11.416: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 53.781865217s
May 1 11:59:13.420: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 55.785696388s
May 1 11:59:19.419: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.784794212s
May 1 11:59:22.275: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.640566308s
May 1 11:59:24.282: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.647942067s
May 1 11:59:27.131: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.496899544s
May 1 11:59:29.135: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.500419816s
May 1 11:59:32.523: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.888468155s
May 1 11:59:34.528: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.893408565s
May 1 11:59:38.443: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.808403741s
May 1 11:59:41.423: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.789153991s
May 1 11:59:43.426: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 1m25.791407446s
May 1 11:59:45.429: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 1m27.795163133s
May 1 11:59:48.007: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 1m30.372314851s
May 1 11:59:50.782: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 1m33.14780778s
May 1 11:59:53.149: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 1m35.51486405s
May 1 11:59:57.336: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 1m39.701672197s
May 1 11:59:59.339: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m41.704653019s
STEP: Saw pod success
May 1 11:59:59.339: INFO: Pod "pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 11:59:59.342: INFO: Trying to get logs from node hunter-worker pod pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 1 12:00:03.022: INFO: Waiting for pod pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017 to disappear
May 1 12:00:03.051: INFO: Pod pod-secrets-0b6c5073-8ba3-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:00:03.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nnjr6" for this suite.
May 1 12:00:18.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:00:19.731: INFO: namespace: e2e-tests-secrets-nnjr6, resource: bindings, ignored listing per whitelist May 1 12:00:19.740: INFO: namespace e2e-tests-secrets-nnjr6 deletion completed in 16.686103516s • [SLOW TEST:123.323 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:00:19.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-gm5sq A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-gm5sq;check="$$(dig +tcp +noall +answer +search 
dns-test-service.e2e-tests-dns-gm5sq A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-gm5sq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-gm5sq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-gm5sq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-gm5sq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-gm5sq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-gm5sq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-gm5sq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-gm5sq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-gm5sq.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-gm5sq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.188.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.188.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.188.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.188.251_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-gm5sq A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-gm5sq;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-gm5sq A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-gm5sq.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-gm5sq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-gm5sq.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-gm5sq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-gm5sq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-gm5sq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-gm5sq.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-gm5sq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.188.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.188.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.188.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.188.251_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 12:02:14.222: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:14.728: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:15.807: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:15.949: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:15.950: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find 
the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:15.952: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-gm5sq from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:15.954: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:15.956: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:15.957: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:15.959: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:15.961: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:16.052: INFO: Lookups using e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc 
jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-gm5sq jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq jessie_udp@dns-test-service.e2e-tests-dns-gm5sq.svc jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc] May 1 12:02:21.056: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:21.059: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:21.073: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:21.090: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:21.093: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:21.095: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-gm5sq from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:21.098: INFO: Unable to read 
jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:21.100: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:21.102: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:21.105: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:21.108: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:21.122: INFO: Lookups using e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-gm5sq jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq jessie_udp@dns-test-service.e2e-tests-dns-gm5sq.svc jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc] May 1 12:02:26.056: INFO: 
Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:26.060: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:26.075: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:26.103: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:26.105: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:26.108: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-gm5sq from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:26.111: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:26.113: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not 
find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:26.115: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:26.116: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:26.118: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:26.130: INFO: Lookups using e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-gm5sq jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq jessie_udp@dns-test-service.e2e-tests-dns-gm5sq.svc jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc] May 1 12:02:31.056: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:31.060: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource 
(get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:31.076: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:31.130: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:31.132: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:31.134: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-gm5sq from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:31.136: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:31.139: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:31.141: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:31.143: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:31.145: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:31.157: INFO: Lookups using e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-gm5sq jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq jessie_udp@dns-test-service.e2e-tests-dns-gm5sq.svc jessie_tcp@dns-test-service.e2e-tests-dns-gm5sq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc] May 1 12:02:37.021: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:37.024: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc from pod e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017: the server could not find the requested resource (get pods dns-test-57709917-8ba3-11ea-88a3-0242ac110017) May 1 12:02:37.039: INFO: Lookups using e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017 failed for: [jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-gm5sq.svc] May 1 12:02:41.219: INFO: DNS 
probes using e2e-tests-dns-gm5sq/dns-test-57709917-8ba3-11ea-88a3-0242ac110017 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:02:43.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-gm5sq" for this suite. May 1 12:03:03.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:03:03.479: INFO: namespace: e2e-tests-dns-gm5sq, resource: bindings, ignored listing per whitelist May 1 12:03:03.520: INFO: namespace e2e-tests-dns-gm5sq deletion completed in 20.183470391s • [SLOW TEST:163.780 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:03:03.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail 
the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 1 12:03:06.992: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:03:16.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-9drrp" for this suite. May 1 12:03:26.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:03:27.115: INFO: namespace: e2e-tests-init-container-9drrp, resource: bindings, ignored listing per whitelist May 1 12:03:27.124: INFO: namespace e2e-tests-init-container-9drrp deletion completed in 9.599737097s • [SLOW TEST:23.603 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:03:27.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-zq5mq in namespace e2e-tests-proxy-wm6w2 I0501 12:03:28.295945 6 runners.go:184] Created replication controller with name: proxy-service-zq5mq, namespace: e2e-tests-proxy-wm6w2, replica count: 1 I0501 12:03:29.346302 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 12:03:30.346478 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 12:03:31.346709 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 12:03:32.346914 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 12:03:33.347108 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 12:03:34.347342 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 12:03:35.347502 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 12:03:36.347669 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 12:03:37.347876 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0501 12:03:38.348038 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 12:03:39.348235 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 12:03:40.348424 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 12:03:41.348599 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 12:03:42.348787 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 12:03:43.348932 6 runners.go:184] proxy-service-zq5mq Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 12:03:43.351: INFO: setup took 15.43945929s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 1 12:03:43.358: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wm6w2/pods/proxy-service-zq5mq-s57xj:160/proxy/: foo (200; 6.110561ms) May 1 12:03:43.359: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wm6w2/pods/proxy-service-zq5mq-s57xj/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 1 12:04:27.982: INFO: Pod name pod-release: Found 0 pods out of 1 May 1 12:04:32.988: INFO: Pod name 
pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:04:34.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-pjzqn" for this suite. May 1 12:04:40.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:04:40.953: INFO: namespace: e2e-tests-replication-controller-pjzqn, resource: bindings, ignored listing per whitelist May 1 12:04:41.222: INFO: namespace e2e-tests-replication-controller-pjzqn deletion completed in 6.879234827s • [SLOW TEST:13.363 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:04:41.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0501 12:04:52.307177 6 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 12:04:52.307: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:04:52.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-4tlnb" for this suite. 
May 1 12:05:00.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:05:00.771: INFO: namespace: e2e-tests-gc-4tlnb, resource: bindings, ignored listing per whitelist May 1 12:05:00.806: INFO: namespace e2e-tests-gc-4tlnb deletion completed in 8.496208152s • [SLOW TEST:19.584 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:05:00.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-qk5q4/configmap-test-fbd449ac-8ba3-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 12:05:00.981: INFO: Waiting up to 5m0s for pod "pod-configmaps-fbd502aa-8ba3-11ea-88a3-0242ac110017" in namespace "e2e-tests-configmap-qk5q4" to be "success or failure" May 1 12:05:00.985: INFO: Pod "pod-configmaps-fbd502aa-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.580979ms May 1 12:05:02.988: INFO: Pod "pod-configmaps-fbd502aa-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0067037s May 1 12:05:04.991: INFO: Pod "pod-configmaps-fbd502aa-8ba3-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009816807s May 1 12:05:07.057: INFO: Pod "pod-configmaps-fbd502aa-8ba3-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.07604339s May 1 12:05:09.060: INFO: Pod "pod-configmaps-fbd502aa-8ba3-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078845413s STEP: Saw pod success May 1 12:05:09.060: INFO: Pod "pod-configmaps-fbd502aa-8ba3-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:05:09.062: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-fbd502aa-8ba3-11ea-88a3-0242ac110017 container env-test: STEP: delete the pod May 1 12:05:09.261: INFO: Waiting for pod pod-configmaps-fbd502aa-8ba3-11ea-88a3-0242ac110017 to disappear May 1 12:05:09.266: INFO: Pod pod-configmaps-fbd502aa-8ba3-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:05:09.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qk5q4" for this suite. 
May 1 12:05:15.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:05:15.325: INFO: namespace: e2e-tests-configmap-qk5q4, resource: bindings, ignored listing per whitelist May 1 12:05:15.336: INFO: namespace e2e-tests-configmap-qk5q4 deletion completed in 6.067606596s • [SLOW TEST:14.530 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:05:15.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 1 12:05:15.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-wnvdt' May 1 12:05:20.166: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED 
and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 1 12:05:20.166: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 1 12:05:20.170: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-cptpx] May 1 12:05:20.171: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-cptpx" in namespace "e2e-tests-kubectl-wnvdt" to be "running and ready" May 1 12:05:20.177: INFO: Pod "e2e-test-nginx-rc-cptpx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.336612ms May 1 12:05:22.181: INFO: Pod "e2e-test-nginx-rc-cptpx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010676795s May 1 12:05:24.185: INFO: Pod "e2e-test-nginx-rc-cptpx": Phase="Running", Reason="", readiness=true. Elapsed: 4.014842312s May 1 12:05:24.185: INFO: Pod "e2e-test-nginx-rc-cptpx" satisfied condition "running and ready" May 1 12:05:24.185: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-cptpx] May 1 12:05:24.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wnvdt' May 1 12:05:24.382: INFO: stderr: "" May 1 12:05:24.382: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 1 12:05:24.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wnvdt' May 1 12:05:24.863: INFO: stderr: "" May 1 12:05:24.863: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:05:24.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wnvdt" for this suite. May 1 12:05:32.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:05:32.932: INFO: namespace: e2e-tests-kubectl-wnvdt, resource: bindings, ignored listing per whitelist May 1 12:05:33.060: INFO: namespace e2e-tests-kubectl-wnvdt deletion completed in 8.187923734s • [SLOW TEST:17.725 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] 
Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:05:33.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 12:05:33.680: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:05:38.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vdkkd" for this suite. 
May 1 12:06:24.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:06:24.098: INFO: namespace: e2e-tests-pods-vdkkd, resource: bindings, ignored listing per whitelist May 1 12:06:24.152: INFO: namespace e2e-tests-pods-vdkkd deletion completed in 46.093246773s • [SLOW TEST:51.092 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:06:24.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-2d806490-8ba4-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 12:06:24.270: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2d82387b-8ba4-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-4x2c7" to be "success or failure" May 1 12:06:24.296: INFO: Pod "pod-projected-configmaps-2d82387b-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.723161ms May 1 12:06:26.298: INFO: Pod "pod-projected-configmaps-2d82387b-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028557742s May 1 12:06:28.576: INFO: Pod "pod-projected-configmaps-2d82387b-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305994597s May 1 12:06:30.843: INFO: Pod "pod-projected-configmaps-2d82387b-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.573376642s May 1 12:06:32.847: INFO: Pod "pod-projected-configmaps-2d82387b-8ba4-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.577201118s STEP: Saw pod success May 1 12:06:32.847: INFO: Pod "pod-projected-configmaps-2d82387b-8ba4-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:06:32.850: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-2d82387b-8ba4-11ea-88a3-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 1 12:06:32.942: INFO: Waiting for pod pod-projected-configmaps-2d82387b-8ba4-11ea-88a3-0242ac110017 to disappear May 1 12:06:32.993: INFO: Pod pod-projected-configmaps-2d82387b-8ba4-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:06:32.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4x2c7" for this suite. 
May 1 12:06:39.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:06:39.154: INFO: namespace: e2e-tests-projected-4x2c7, resource: bindings, ignored listing per whitelist May 1 12:06:39.216: INFO: namespace e2e-tests-projected-4x2c7 deletion completed in 6.220126419s • [SLOW TEST:15.064 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:06:39.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 12:06:39.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 1 12:06:39.561: INFO: stderr: "" May 1 12:06:39.561: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:25:50Z\", GoVersion:\"go1.11.13\", 
Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:06:39.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-65jhd" for this suite. May 1 12:06:46.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:06:46.576: INFO: namespace: e2e-tests-kubectl-65jhd, resource: bindings, ignored listing per whitelist May 1 12:06:46.585: INFO: namespace e2e-tests-kubectl-65jhd deletion completed in 7.021049678s • [SLOW TEST:7.368 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:06:46.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] 
optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-3b82dc3f-8ba4-11ea-88a3-0242ac110017 STEP: Creating secret with name s-test-opt-upd-3b82dca2-8ba4-11ea-88a3-0242ac110017 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3b82dc3f-8ba4-11ea-88a3-0242ac110017 STEP: Updating secret s-test-opt-upd-3b82dca2-8ba4-11ea-88a3-0242ac110017 STEP: Creating secret with name s-test-opt-create-3b82dcbd-8ba4-11ea-88a3-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:08:26.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zr95l" for this suite. May 1 12:08:50.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:08:50.128: INFO: namespace: e2e-tests-projected-zr95l, resource: bindings, ignored listing per whitelist May 1 12:08:50.176: INFO: namespace e2e-tests-projected-zr95l deletion completed in 24.07033562s • [SLOW TEST:123.590 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:08:50.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-84868f86-8ba4-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume secrets May 1 12:08:50.266: INFO: Waiting up to 5m0s for pod "pod-secrets-84875273-8ba4-11ea-88a3-0242ac110017" in namespace "e2e-tests-secrets-b2mvd" to be "success or failure" May 1 12:08:50.312: INFO: Pod "pod-secrets-84875273-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 46.715577ms May 1 12:08:52.317: INFO: Pod "pod-secrets-84875273-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051052084s May 1 12:08:54.373: INFO: Pod "pod-secrets-84875273-8ba4-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.106851701s May 1 12:08:56.377: INFO: Pod "pod-secrets-84875273-8ba4-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.111384379s STEP: Saw pod success May 1 12:08:56.377: INFO: Pod "pod-secrets-84875273-8ba4-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:08:56.380: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-84875273-8ba4-11ea-88a3-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 12:08:56.392: INFO: Waiting for pod pod-secrets-84875273-8ba4-11ea-88a3-0242ac110017 to disappear May 1 12:08:56.397: INFO: Pod pod-secrets-84875273-8ba4-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:08:56.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-b2mvd" for this suite. May 1 12:09:02.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:09:02.440: INFO: namespace: e2e-tests-secrets-b2mvd, resource: bindings, ignored listing per whitelist May 1 12:09:02.475: INFO: namespace e2e-tests-secrets-b2mvd deletion completed in 6.076064743s • [SLOW TEST:12.299 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:09:02.475: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:09:02.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-sklhh" for this suite. May 1 12:09:24.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:09:24.693: INFO: namespace: e2e-tests-kubelet-test-sklhh, resource: bindings, ignored listing per whitelist May 1 12:09:24.775: INFO: namespace e2e-tests-kubelet-test-sklhh deletion completed in 22.148955761s • [SLOW TEST:22.299 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:09:24.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 12:09:24.880: INFO: Creating deployment "test-recreate-deployment" May 1 12:09:24.897: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 1 12:09:24.902: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 1 12:09:27.015: INFO: Waiting deployment "test-recreate-deployment" to complete May 1 12:09:27.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931765, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 12:09:29.022: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931765, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 12:09:31.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931765, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 12:09:33.221: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931765, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 12:09:35.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931765, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931764, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 12:09:37.021: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 1 
12:09:37.027: INFO: Updating deployment test-recreate-deployment May 1 12:09:37.027: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 1 12:09:37.859: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-69xms,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-69xms/deployments/test-recreate-deployment,UID:992a81d9-8ba4-11ea-99e8-0242ac110002,ResourceVersion:8162968,Generation:2,CreationTimestamp:2020-05-01 12:09:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-01 12:09:37 +0000 UTC 2020-05-01 12:09:37 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-01 12:09:37 +0000 UTC 2020-05-01 12:09:24 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 1 12:09:37.863: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-69xms,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-69xms/replicasets/test-recreate-deployment-589c4bfd,UID:a0779c7b-8ba4-11ea-99e8-0242ac110002,ResourceVersion:8162966,Generation:1,CreationTimestamp:2020-05-01 12:09:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 992a81d9-8ba4-11ea-99e8-0242ac110002 0xc000c7f71f 0xc000c7f730}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 12:09:37.863: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 1 12:09:37.863: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-69xms,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-69xms/replicasets/test-recreate-deployment-5bf7f65dc,UID:992df934-8ba4-11ea-99e8-0242ac110002,ResourceVersion:8162956,Generation:2,CreationTimestamp:2020-05-01 12:09:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 992a81d9-8ba4-11ea-99e8-0242ac110002 0xc000c7fb70 0xc000c7fb71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 12:09:37.945: INFO: Pod "test-recreate-deployment-589c4bfd-c8rv5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-c8rv5,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-69xms,SelfLink:/api/v1/namespaces/e2e-tests-deployment-69xms/pods/test-recreate-deployment-589c4bfd-c8rv5,UID:a07ad224-8ba4-11ea-99e8-0242ac110002,ResourceVersion:8162969,Generation:0,CreationTimestamp:2020-05-01 12:09:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd a0779c7b-8ba4-11ea-99e8-0242ac110002 0xc00177aa9f 0xc00177aab0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-psl86 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-psl86,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-psl86 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00177ac40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00177ac60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:09:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:09:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:09:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:09:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-01 12:09:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:09:37.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-69xms" for this suite. 
May 1 12:09:44.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:09:44.023: INFO: namespace: e2e-tests-deployment-69xms, resource: bindings, ignored listing per whitelist May 1 12:09:44.150: INFO: namespace e2e-tests-deployment-69xms deletion completed in 6.201928756s • [SLOW TEST:19.375 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:09:44.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 1 12:09:44.235: INFO: Waiting up to 5m0s for pod "pod-a4b02a5a-8ba4-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-jlcfh" to be "success or failure" May 1 12:09:44.271: INFO: Pod "pod-a4b02a5a-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 36.242982ms May 1 12:09:46.283: INFO: Pod "pod-a4b02a5a-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.048029243s May 1 12:09:48.287: INFO: Pod "pod-a4b02a5a-8ba4-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052003001s STEP: Saw pod success May 1 12:09:48.287: INFO: Pod "pod-a4b02a5a-8ba4-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:09:48.290: INFO: Trying to get logs from node hunter-worker2 pod pod-a4b02a5a-8ba4-11ea-88a3-0242ac110017 container test-container: STEP: delete the pod May 1 12:09:48.482: INFO: Waiting for pod pod-a4b02a5a-8ba4-11ea-88a3-0242ac110017 to disappear May 1 12:09:48.534: INFO: Pod pod-a4b02a5a-8ba4-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:09:48.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jlcfh" for this suite. May 1 12:09:54.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:09:54.623: INFO: namespace: e2e-tests-emptydir-jlcfh, resource: bindings, ignored listing per whitelist May 1 12:09:54.675: INFO: namespace e2e-tests-emptydir-jlcfh deletion completed in 6.137507311s • [SLOW TEST:10.525 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:09:54.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-l5bj STEP: Creating a pod to test atomic-volume-subpath May 1 12:09:54.889: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-l5bj" in namespace "e2e-tests-subpath-c8596" to be "success or failure" May 1 12:09:54.945: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. Elapsed: 56.258026ms May 1 12:09:56.948: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059192766s May 1 12:09:59.409: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.519932937s May 1 12:10:01.786: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.897052474s May 1 12:10:03.789: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.899874618s May 1 12:10:05.792: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.902956682s May 1 12:10:08.079: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. Elapsed: 13.189885239s May 1 12:10:10.082: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.192578792s May 1 12:10:12.084: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. Elapsed: 17.195349038s May 1 12:10:14.087: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. Elapsed: 19.197888564s May 1 12:10:16.193: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. Elapsed: 21.304161708s May 1 12:10:18.644: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Pending", Reason="", readiness=false. Elapsed: 23.755333249s May 1 12:10:21.182: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=true. Elapsed: 26.293497271s May 1 12:10:23.185: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=false. Elapsed: 28.296244909s May 1 12:10:25.188: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=false. Elapsed: 30.299419653s May 1 12:10:27.191: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=false. Elapsed: 32.302145805s May 1 12:10:29.194: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=false. Elapsed: 34.305172596s May 1 12:10:31.197: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=false. Elapsed: 36.308084581s May 1 12:10:33.201: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=false. Elapsed: 38.31211131s May 1 12:10:35.204: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=false. Elapsed: 40.315105696s May 1 12:10:37.207: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=false. Elapsed: 42.318057677s May 1 12:10:41.361: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=false. Elapsed: 46.472087201s May 1 12:10:43.730: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.841473934s May 1 12:10:45.734: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Running", Reason="", readiness=false. Elapsed: 50.844637989s May 1 12:10:47.736: INFO: Pod "pod-subpath-test-configmap-l5bj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 52.84730144s STEP: Saw pod success May 1 12:10:47.736: INFO: Pod "pod-subpath-test-configmap-l5bj" satisfied condition "success or failure" May 1 12:10:47.738: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-l5bj container test-container-subpath-configmap-l5bj: STEP: delete the pod May 1 12:10:48.489: INFO: Waiting for pod pod-subpath-test-configmap-l5bj to disappear May 1 12:10:48.553: INFO: Pod pod-subpath-test-configmap-l5bj no longer exists STEP: Deleting pod pod-subpath-test-configmap-l5bj May 1 12:10:48.554: INFO: Deleting pod "pod-subpath-test-configmap-l5bj" in namespace "e2e-tests-subpath-c8596" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:10:48.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-c8596" for this suite. 
May 1 12:10:54.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:10:54.732: INFO: namespace: e2e-tests-subpath-c8596, resource: bindings, ignored listing per whitelist May 1 12:10:54.736: INFO: namespace e2e-tests-subpath-c8596 deletion completed in 6.17183965s • [SLOW TEST:60.061 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:10:54.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete 
[NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:11:37.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-dfpdp" for this suite. 
May 1 12:11:47.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:11:47.572: INFO: namespace: e2e-tests-container-runtime-dfpdp, resource: bindings, ignored listing per whitelist May 1 12:11:47.648: INFO: namespace e2e-tests-container-runtime-dfpdp deletion completed in 10.101873113s • [SLOW TEST:52.912 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:11:47.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-ee54de26-8ba4-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume secrets May 1 12:11:47.787: INFO: Waiting up to 5m0s for pod "pod-secrets-ee56f5db-8ba4-11ea-88a3-0242ac110017" in namespace "e2e-tests-secrets-7ql7j" to be 
"success or failure" May 1 12:11:47.790: INFO: Pod "pod-secrets-ee56f5db-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.041846ms May 1 12:11:49.884: INFO: Pod "pod-secrets-ee56f5db-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096614085s May 1 12:11:53.298: INFO: Pod "pod-secrets-ee56f5db-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.510661791s May 1 12:11:55.302: INFO: Pod "pod-secrets-ee56f5db-8ba4-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.514197706s May 1 12:11:57.306: INFO: Pod "pod-secrets-ee56f5db-8ba4-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 9.518778021s May 1 12:11:59.310: INFO: Pod "pod-secrets-ee56f5db-8ba4-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.522464352s STEP: Saw pod success May 1 12:11:59.310: INFO: Pod "pod-secrets-ee56f5db-8ba4-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:11:59.313: INFO: Trying to get logs from node hunter-worker pod pod-secrets-ee56f5db-8ba4-11ea-88a3-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 12:11:59.368: INFO: Waiting for pod pod-secrets-ee56f5db-8ba4-11ea-88a3-0242ac110017 to disappear May 1 12:11:59.378: INFO: Pod pod-secrets-ee56f5db-8ba4-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:11:59.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7ql7j" for this suite. 
May 1 12:12:05.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:12:05.506: INFO: namespace: e2e-tests-secrets-7ql7j, resource: bindings, ignored listing per whitelist May 1 12:12:05.509: INFO: namespace e2e-tests-secrets-7ql7j deletion completed in 6.127795492s • [SLOW TEST:17.860 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:12:05.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 1 12:12:12.190: INFO: Successfully updated pod "pod-update-f8fd56d1-8ba4-11ea-88a3-0242ac110017" STEP: verifying the updated pod is in kubernetes May 1 12:12:12.199: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:12:12.199: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "e2e-tests-pods-dqn6l" for this suite. May 1 12:12:36.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:12:36.220: INFO: namespace: e2e-tests-pods-dqn6l, resource: bindings, ignored listing per whitelist May 1 12:12:36.276: INFO: namespace e2e-tests-pods-dqn6l deletion completed in 24.073752936s • [SLOW TEST:30.767 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:12:36.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 1 12:12:42.544: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-0b5cab9a-8ba5-11ea-88a3-0242ac110017", GenerateName:"", Namespace:"e2e-tests-pods-p7ltx", 
SelfLink:"/api/v1/namespaces/e2e-tests-pods-p7ltx/pods/pod-submit-remove-0b5cab9a-8ba5-11ea-88a3-0242ac110017", UID:"0b5dd59b-8ba5-11ea-99e8-0242ac110002", ResourceVersion:"8163541", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723931956, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"470884880"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-cnbwm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0021d5340), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cnbwm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001cdb6a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00193b8c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cdb6f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cdb710)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001cdb718), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), 
EnableServiceLinks:(*bool)(0xc001cdb71c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931956, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931960, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931960, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723931956, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.167", StartTime:(*v1.Time)(0xc0015e8c60), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0015e8ca0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"containerd://1e50b989f5a3ca3a824c0f5f33127bcf52d1e15ea599fd419e46db14b234c942"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 1 12:12:47.556: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:12:47.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-p7ltx" for this suite. May 1 12:12:53.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:12:53.871: INFO: namespace: e2e-tests-pods-p7ltx, resource: bindings, ignored listing per whitelist May 1 12:12:53.872: INFO: namespace e2e-tests-pods-p7ltx deletion completed in 6.310253136s • [SLOW TEST:17.596 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:12:53.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 1 12:13:00.068: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-15ce877c-8ba5-11ea-88a3-0242ac110017,GenerateName:,Namespace:e2e-tests-events-t7lzs,SelfLink:/api/v1/namespaces/e2e-tests-events-t7lzs/pods/send-events-15ce877c-8ba5-11ea-88a3-0242ac110017,UID:15d03293-8ba5-11ea-99e8-0242ac110002,ResourceVersion:8163601,Generation:0,CreationTimestamp:2020-05-01 12:12:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 994284579,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mf5mz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mf5mz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-mf5mz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb59d0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001bb59f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:12:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:12:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:12:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:12:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.168,StartTime:2020-05-01 12:12:54 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-01 12:12:58 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://617a496d4a5d3fd774c8e9bf2ee1b9d04fea0534e5991b1db406443a729c0f0b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 1 12:13:02.073: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 1 12:13:04.125: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:13:04.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-t7lzs" for this suite. 
May 1 12:13:48.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:13:48.521: INFO: namespace: e2e-tests-events-t7lzs, resource: bindings, ignored listing per whitelist May 1 12:13:48.585: INFO: namespace e2e-tests-events-t7lzs deletion completed in 44.122725619s • [SLOW TEST:54.713 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:13:48.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-sf8sl/secret-test-36672aec-8ba5-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume secrets May 1 12:13:48.718: INFO: Waiting up to 5m0s for pod "pod-configmaps-3669b002-8ba5-11ea-88a3-0242ac110017" in namespace "e2e-tests-secrets-sf8sl" to be "success or failure" May 1 12:13:48.722: INFO: Pod "pod-configmaps-3669b002-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.719245ms May 1 12:13:50.726: INFO: Pod "pod-configmaps-3669b002-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007854246s May 1 12:13:52.730: INFO: Pod "pod-configmaps-3669b002-8ba5-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011811352s STEP: Saw pod success May 1 12:13:52.730: INFO: Pod "pod-configmaps-3669b002-8ba5-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:13:52.733: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-3669b002-8ba5-11ea-88a3-0242ac110017 container env-test: STEP: delete the pod May 1 12:13:52.770: INFO: Waiting for pod pod-configmaps-3669b002-8ba5-11ea-88a3-0242ac110017 to disappear May 1 12:13:52.776: INFO: Pod pod-configmaps-3669b002-8ba5-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:13:52.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-sf8sl" for this suite. 
May 1 12:14:01.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:14:01.355: INFO: namespace: e2e-tests-secrets-sf8sl, resource: bindings, ignored listing per whitelist May 1 12:14:01.399: INFO: namespace e2e-tests-secrets-sf8sl deletion completed in 8.619072978s • [SLOW TEST:12.814 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:14:01.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-3f05d754-8ba5-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume secrets May 1 12:14:03.375: INFO: Waiting up to 5m0s for pod "pod-secrets-3f285be8-8ba5-11ea-88a3-0242ac110017" in namespace "e2e-tests-secrets-95fp6" to be "success or failure" May 1 12:14:03.395: INFO: Pod "pod-secrets-3f285be8-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.593364ms May 1 12:14:05.399: INFO: Pod "pod-secrets-3f285be8-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024702019s May 1 12:14:07.671: INFO: Pod "pod-secrets-3f285be8-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296127907s May 1 12:14:09.675: INFO: Pod "pod-secrets-3f285be8-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.300016032s May 1 12:14:12.007: INFO: Pod "pod-secrets-3f285be8-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.631975833s May 1 12:14:14.011: INFO: Pod "pod-secrets-3f285be8-8ba5-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.636402464s STEP: Saw pod success May 1 12:14:14.011: INFO: Pod "pod-secrets-3f285be8-8ba5-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:14:14.014: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-3f285be8-8ba5-11ea-88a3-0242ac110017 container secret-env-test: STEP: delete the pod May 1 12:14:14.237: INFO: Waiting for pod pod-secrets-3f285be8-8ba5-11ea-88a3-0242ac110017 to disappear May 1 12:14:14.389: INFO: Pod pod-secrets-3f285be8-8ba5-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:14:14.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-95fp6" for this suite. 
May 1 12:14:20.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:14:20.493: INFO: namespace: e2e-tests-secrets-95fp6, resource: bindings, ignored listing per whitelist May 1 12:14:20.542: INFO: namespace e2e-tests-secrets-95fp6 deletion completed in 6.147829029s • [SLOW TEST:19.143 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:14:20.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 1 12:14:20.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-74nmg run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 1 12:14:26.051: INFO: stderr: 
"kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0501 12:14:25.973708 1483 log.go:172] (0xc000732370) (0xc0007ce5a0) Create stream\nI0501 12:14:25.973864 1483 log.go:172] (0xc000732370) (0xc0007ce5a0) Stream added, broadcasting: 1\nI0501 12:14:25.977004 1483 log.go:172] (0xc000732370) Reply frame received for 1\nI0501 12:14:25.977072 1483 log.go:172] (0xc000732370) (0xc0002fa000) Create stream\nI0501 12:14:25.977093 1483 log.go:172] (0xc000732370) (0xc0002fa000) Stream added, broadcasting: 3\nI0501 12:14:25.978263 1483 log.go:172] (0xc000732370) Reply frame received for 3\nI0501 12:14:25.978336 1483 log.go:172] (0xc000732370) (0xc000578000) Create stream\nI0501 12:14:25.978356 1483 log.go:172] (0xc000732370) (0xc000578000) Stream added, broadcasting: 5\nI0501 12:14:25.979251 1483 log.go:172] (0xc000732370) Reply frame received for 5\nI0501 12:14:25.979303 1483 log.go:172] (0xc000732370) (0xc0007ce640) Create stream\nI0501 12:14:25.979314 1483 log.go:172] (0xc000732370) (0xc0007ce640) Stream added, broadcasting: 7\nI0501 12:14:25.980442 1483 log.go:172] (0xc000732370) Reply frame received for 7\nI0501 12:14:25.980663 1483 log.go:172] (0xc0002fa000) (3) Writing data frame\nI0501 12:14:25.980824 1483 log.go:172] (0xc0002fa000) (3) Writing data frame\nI0501 12:14:25.981953 1483 log.go:172] (0xc000732370) Data frame received for 5\nI0501 12:14:25.981971 1483 log.go:172] (0xc000578000) (5) Data frame handling\nI0501 12:14:25.981987 1483 log.go:172] (0xc000578000) (5) Data frame sent\nI0501 12:14:25.982593 1483 log.go:172] (0xc000732370) Data frame received for 5\nI0501 12:14:25.982618 1483 log.go:172] (0xc000578000) (5) Data frame handling\nI0501 12:14:25.982648 1483 log.go:172] (0xc000578000) (5) Data frame sent\nI0501 12:14:26.027488 1483 log.go:172] (0xc000732370) Data frame received for 5\nI0501 
12:14:26.027522 1483 log.go:172] (0xc000578000) (5) Data frame handling\nI0501 12:14:26.027545 1483 log.go:172] (0xc000732370) Data frame received for 7\nI0501 12:14:26.027558 1483 log.go:172] (0xc0007ce640) (7) Data frame handling\nI0501 12:14:26.028491 1483 log.go:172] (0xc000732370) (0xc0002fa000) Stream removed, broadcasting: 3\nI0501 12:14:26.028696 1483 log.go:172] (0xc000732370) Data frame received for 1\nI0501 12:14:26.028735 1483 log.go:172] (0xc0007ce5a0) (1) Data frame handling\nI0501 12:14:26.028772 1483 log.go:172] (0xc0007ce5a0) (1) Data frame sent\nI0501 12:14:26.028792 1483 log.go:172] (0xc000732370) (0xc0007ce5a0) Stream removed, broadcasting: 1\nI0501 12:14:26.028810 1483 log.go:172] (0xc000732370) Go away received\nI0501 12:14:26.028993 1483 log.go:172] (0xc000732370) (0xc0007ce5a0) Stream removed, broadcasting: 1\nI0501 12:14:26.029021 1483 log.go:172] (0xc000732370) (0xc0002fa000) Stream removed, broadcasting: 3\nI0501 12:14:26.029032 1483 log.go:172] (0xc000732370) (0xc000578000) Stream removed, broadcasting: 5\nI0501 12:14:26.029047 1483 log.go:172] (0xc000732370) (0xc0007ce640) Stream removed, broadcasting: 7\n" May 1 12:14:26.051: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:14:28.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-74nmg" for this suite. 
May 1 12:14:36.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:14:36.237: INFO: namespace: e2e-tests-kubectl-74nmg, resource: bindings, ignored listing per whitelist May 1 12:14:36.261: INFO: namespace e2e-tests-kubectl-74nmg deletion completed in 8.102871359s • [SLOW TEST:15.719 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:14:36.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 1 12:14:48.552: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:14:48.606: INFO: Pod pod-with-poststart-exec-hook still exists May 1 12:14:50.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:14:50.610: INFO: Pod pod-with-poststart-exec-hook still exists May 1 12:14:52.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:14:52.611: INFO: Pod pod-with-poststart-exec-hook still exists May 1 12:14:54.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:14:54.623: INFO: Pod pod-with-poststart-exec-hook still exists May 1 12:14:56.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:14:56.611: INFO: Pod pod-with-poststart-exec-hook still exists May 1 12:14:58.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:14:58.611: INFO: Pod pod-with-poststart-exec-hook still exists May 1 12:15:00.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:15:00.611: INFO: Pod pod-with-poststart-exec-hook still exists May 1 12:15:02.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:15:02.611: INFO: Pod pod-with-poststart-exec-hook still exists May 1 12:15:04.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:15:04.762: INFO: Pod pod-with-poststart-exec-hook still exists May 1 12:15:06.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:15:06.609: INFO: Pod pod-with-poststart-exec-hook still exists May 1 12:15:08.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:15:08.611: INFO: Pod pod-with-poststart-exec-hook still 
exists May 1 12:15:10.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:15:10.611: INFO: Pod pod-with-poststart-exec-hook still exists May 1 12:15:12.606: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 1 12:15:12.610: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:15:12.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6x4bk" for this suite. May 1 12:15:34.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:15:34.770: INFO: namespace: e2e-tests-container-lifecycle-hook-6x4bk, resource: bindings, ignored listing per whitelist May 1 12:15:34.773: INFO: namespace e2e-tests-container-lifecycle-hook-6x4bk deletion completed in 22.158435182s • [SLOW TEST:58.512 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client 
May 1 12:15:34.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 1 12:15:34.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75b0551e-8ba5-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-978rt" to be "success or failure"
May 1 12:15:34.881: INFO: Pod "downwardapi-volume-75b0551e-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177378ms
May 1 12:15:36.885: INFO: Pod "downwardapi-volume-75b0551e-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008779278s
May 1 12:15:38.890: INFO: Pod "downwardapi-volume-75b0551e-8ba5-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013500315s
STEP: Saw pod success
May 1 12:15:38.890: INFO: Pod "downwardapi-volume-75b0551e-8ba5-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:15:38.894: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-75b0551e-8ba5-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May 1 12:15:38.923: INFO: Waiting for pod downwardapi-volume-75b0551e-8ba5-11ea-88a3-0242ac110017 to disappear
May 1 12:15:38.934: INFO: Pod downwardapi-volume-75b0551e-8ba5-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:15:38.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-978rt" for this suite.
May 1 12:15:44.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:15:45.036: INFO: namespace: e2e-tests-downward-api-978rt, resource: bindings, ignored listing per whitelist
May 1 12:15:45.048: INFO: namespace e2e-tests-downward-api-978rt deletion completed in 6.110815311s

• [SLOW TEST:10.275 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:15:45.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
May 1 12:15:45.152: INFO: Waiting up to 5m0s for pod "pod-7bd22570-8ba5-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-lfjw4" to be "success or failure"
May 1 12:15:45.190: INFO: Pod "pod-7bd22570-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 37.757309ms
May 1 12:15:47.193: INFO: Pod "pod-7bd22570-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041213734s
May 1 12:15:49.196: INFO: Pod "pod-7bd22570-8ba5-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044346275s
STEP: Saw pod success
May 1 12:15:49.196: INFO: Pod "pod-7bd22570-8ba5-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:15:49.199: INFO: Trying to get logs from node hunter-worker pod pod-7bd22570-8ba5-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 12:15:49.229: INFO: Waiting for pod pod-7bd22570-8ba5-11ea-88a3-0242ac110017 to disappear
May 1 12:15:49.233: INFO: Pod pod-7bd22570-8ba5-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:15:49.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lfjw4" for this suite.
May 1 12:15:55.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:15:55.343: INFO: namespace: e2e-tests-emptydir-lfjw4, resource: bindings, ignored listing per whitelist
May 1 12:15:55.423: INFO: namespace e2e-tests-emptydir-lfjw4 deletion completed in 6.186447975s

• [SLOW TEST:10.375 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:15:55.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
May 1 12:15:55.577: INFO: Waiting up to 5m0s for pod "client-containers-820680d9-8ba5-11ea-88a3-0242ac110017" in namespace "e2e-tests-containers-qxckx" to be "success or failure"
May 1 12:15:55.589: INFO: Pod "client-containers-820680d9-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.138683ms
May 1 12:15:57.941: INFO: Pod "client-containers-820680d9-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.364257551s
May 1 12:15:59.946: INFO: Pod "client-containers-820680d9-8ba5-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.369067355s
STEP: Saw pod success
May 1 12:15:59.946: INFO: Pod "client-containers-820680d9-8ba5-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:15:59.949: INFO: Trying to get logs from node hunter-worker2 pod client-containers-820680d9-8ba5-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 12:15:59.968: INFO: Waiting for pod client-containers-820680d9-8ba5-11ea-88a3-0242ac110017 to disappear
May 1 12:15:59.972: INFO: Pod client-containers-820680d9-8ba5-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:15:59.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-qxckx" for this suite.
May 1 12:16:06.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:16:06.109: INFO: namespace: e2e-tests-containers-qxckx, resource: bindings, ignored listing per whitelist
May 1 12:16:06.127: INFO: namespace e2e-tests-containers-qxckx deletion completed in 6.1521788s

• [SLOW TEST:10.704 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:16:06.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-885f118a-8ba5-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume secrets
May 1 12:16:06.226: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-885f852c-8ba5-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-jx6fs" to be "success or failure"
May 1 12:16:06.234: INFO: Pod "pod-projected-secrets-885f852c-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.667561ms
May 1 12:16:08.238: INFO: Pod "pod-projected-secrets-885f852c-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011660278s
May 1 12:16:10.247: INFO: Pod "pod-projected-secrets-885f852c-8ba5-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020614149s
STEP: Saw pod success
May 1 12:16:10.247: INFO: Pod "pod-projected-secrets-885f852c-8ba5-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:16:10.250: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-885f852c-8ba5-11ea-88a3-0242ac110017 container projected-secret-volume-test:
STEP: delete the pod
May 1 12:16:10.278: INFO: Waiting for pod pod-projected-secrets-885f852c-8ba5-11ea-88a3-0242ac110017 to disappear
May 1 12:16:10.514: INFO: Pod pod-projected-secrets-885f852c-8ba5-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:16:10.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jx6fs" for this suite.
May 1 12:16:16.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:16:16.621: INFO: namespace: e2e-tests-projected-jx6fs, resource: bindings, ignored listing per whitelist
May 1 12:16:16.641: INFO: namespace e2e-tests-projected-jx6fs deletion completed in 6.123049864s

• [SLOW TEST:10.514 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:16:16.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-qt8qs
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-qt8qs
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-qt8qs
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-qt8qs
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-qt8qs
May 1 12:16:22.809: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qt8qs, name: ss-0, uid: 915f71e9-8ba5-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete.
May 1 12:16:31.245: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qt8qs, name: ss-0, uid: 915f71e9-8ba5-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 1 12:16:31.337: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qt8qs, name: ss-0, uid: 915f71e9-8ba5-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 1 12:16:31.358: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-qt8qs
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-qt8qs
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-qt8qs and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 1 12:16:35.525: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qt8qs
May 1 12:16:35.528: INFO: Scaling statefulset ss to 0
May 1 12:16:55.544: INFO: Waiting for statefulset status.replicas updated to 0
May 1 12:16:55.548: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:16:55.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-qt8qs" for this suite.
May 1 12:17:01.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:17:01.665: INFO: namespace: e2e-tests-statefulset-qt8qs, resource: bindings, ignored listing per whitelist
May 1 12:17:01.713: INFO: namespace e2e-tests-statefulset-qt8qs deletion completed in 6.148043847s

• [SLOW TEST:45.071 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:17:01.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
May 1 12:17:01.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t6tg6'
May 1 12:17:04.434: INFO: stderr: ""
May 1 12:17:04.434: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
May 1 12:17:05.439: INFO: Selector matched 1 pods for map[app:redis]
May 1 12:17:05.439: INFO: Found 0 / 1
May 1 12:17:06.598: INFO: Selector matched 1 pods for map[app:redis]
May 1 12:17:06.598: INFO: Found 0 / 1
May 1 12:17:07.439: INFO: Selector matched 1 pods for map[app:redis]
May 1 12:17:07.439: INFO: Found 0 / 1
May 1 12:17:08.685: INFO: Selector matched 1 pods for map[app:redis]
May 1 12:17:08.685: INFO: Found 0 / 1
May 1 12:17:09.438: INFO: Selector matched 1 pods for map[app:redis]
May 1 12:17:09.438: INFO: Found 1 / 1
May 1 12:17:09.438: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 1 12:17:09.442: INFO: Selector matched 1 pods for map[app:redis]
May 1 12:17:09.442: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
May 1 12:17:09.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qf4gn redis-master --namespace=e2e-tests-kubectl-t6tg6'
May 1 12:17:09.558: INFO: stderr: ""
May 1 12:17:09.558: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 May 12:17:08.150 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 May 12:17:08.150 # Server started, Redis version 3.2.12\n1:M 01 May 12:17:08.150 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 May 12:17:08.150 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
May 1 12:17:09.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qf4gn redis-master --namespace=e2e-tests-kubectl-t6tg6 --tail=1'
May 1 12:17:09.685: INFO: stderr: ""
May 1 12:17:09.686: INFO: stdout: "1:M 01 May 12:17:08.150 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
May 1 12:17:09.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qf4gn redis-master --namespace=e2e-tests-kubectl-t6tg6 --limit-bytes=1'
May 1 12:17:09.811: INFO: stderr: ""
May 1 12:17:09.811: INFO: stdout: " "
STEP: exposing timestamps
May 1 12:17:09.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qf4gn redis-master --namespace=e2e-tests-kubectl-t6tg6 --tail=1 --timestamps'
May 1 12:17:09.914: INFO: stderr: ""
May 1 12:17:09.914: INFO: stdout: "2020-05-01T12:17:08.150427866Z 1:M 01 May 12:17:08.150 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
May 1 12:17:12.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qf4gn redis-master --namespace=e2e-tests-kubectl-t6tg6 --since=1s'
May 1 12:17:12.536: INFO: stderr: ""
May 1 12:17:12.536: INFO: stdout: ""
May 1 12:17:12.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qf4gn redis-master --namespace=e2e-tests-kubectl-t6tg6 --since=24h'
May 1 12:17:12.643: INFO: stderr: ""
May 1 12:17:12.643: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 May 12:17:08.150 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 May 12:17:08.150 # Server started, Redis version 3.2.12\n1:M 01 May 12:17:08.150 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 May 12:17:08.150 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
May 1 12:17:12.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t6tg6'
May 1 12:17:12.780: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 12:17:12.780: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
May 1 12:17:12.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-t6tg6'
May 1 12:17:12.875: INFO: stderr: "No resources found.\n"
May 1 12:17:12.875: INFO: stdout: ""
May 1 12:17:12.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-t6tg6 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 1 12:17:12.962: INFO: stderr: ""
May 1 12:17:12.962: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:17:12.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t6tg6" for this suite.
May 1 12:17:35.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:17:35.379: INFO: namespace: e2e-tests-kubectl-t6tg6, resource: bindings, ignored listing per whitelist
May 1 12:17:35.430: INFO: namespace e2e-tests-kubectl-t6tg6 deletion completed in 22.46487043s

• [SLOW TEST:33.717 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:17:35.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 1 12:17:35.626: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 1 12:17:40.631: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 1 12:17:40.631: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 1 12:17:42.635: INFO: Creating deployment "test-rollover-deployment"
May 1 12:17:42.651: INFO: Make sure deployment
"test-rollover-deployment" performs scaling operations May 1 12:17:44.656: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 1 12:17:44.661: INFO: Ensure that both replica sets have 1 created replica May 1 12:17:44.666: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 1 12:17:44.671: INFO: Updating deployment test-rollover-deployment May 1 12:17:44.671: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 1 12:17:46.739: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 1 12:17:46.744: INFO: Make sure deployment "test-rollover-deployment" is complete May 1 12:17:46.749: INFO: all replica sets need to contain the pod-template-hash label May 1 12:17:46.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932264, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 12:17:48.758: INFO: all replica sets need to contain the pod-template-hash label May 1 12:17:48.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932268, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 12:17:50.758: INFO: all replica sets need to contain the pod-template-hash label May 1 12:17:50.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932268, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 12:17:52.757: INFO: all replica sets need to contain the pod-template-hash label May 1 12:17:52.757: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932268, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 12:17:54.756: INFO: all replica sets need to contain the pod-template-hash label May 1 12:17:54.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932268, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 12:17:56.758: INFO: all 
replica sets need to contain the pod-template-hash label May 1 12:17:56.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932268, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723932262, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 12:17:58.888: INFO: May 1 12:17:58.888: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 1 12:17:58.896: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-b2658,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b2658/deployments/test-rollover-deployment,UID:c1da0413-8ba5-11ea-99e8-0242ac110002,ResourceVersion:8164612,Generation:2,CreationTimestamp:2020-05-01 12:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-01 12:17:42 +0000 UTC 2020-05-01 12:17:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-01 12:17:58 +0000 UTC 2020-05-01 12:17:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 1 12:17:58.900: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-b2658,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b2658/replicasets/test-rollover-deployment-5b8479fdb6,UID:c310a1cc-8ba5-11ea-99e8-0242ac110002,ResourceVersion:8164602,Generation:2,CreationTimestamp:2020-05-01 12:17:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c1da0413-8ba5-11ea-99e8-0242ac110002 0xc001b15547 0xc001b15548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 1 12:17:58.900: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 1 12:17:58.900: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-b2658,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b2658/replicasets/test-rollover-controller,UID:bd9f9a70-8ba5-11ea-99e8-0242ac110002,ResourceVersion:8164611,Generation:2,CreationTimestamp:2020-05-01 12:17:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c1da0413-8ba5-11ea-99e8-0242ac110002 0xc001b15197 0xc001b15198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 12:17:58.900: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-b2658,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b2658/replicasets/test-rollover-deployment-58494b7559,UID:c1dd596e-8ba5-11ea-99e8-0242ac110002,ResourceVersion:8164568,Generation:2,CreationTimestamp:2020-05-01 12:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c1da0413-8ba5-11ea-99e8-0242ac110002 0xc001b15447 0xc001b15448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 1 12:17:58.904: INFO: Pod "test-rollover-deployment-5b8479fdb6-bbmrg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-bbmrg,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-b2658,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b2658/pods/test-rollover-deployment-5b8479fdb6-bbmrg,UID:c31ca24d-8ba5-11ea-99e8-0242ac110002,ResourceVersion:8164580,Generation:0,CreationTimestamp:2020-05-01 12:17:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 c310a1cc-8ba5-11ea-99e8-0242ac110002 0xc001cdbe27 0xc001cdbe28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-j7whh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j7whh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-j7whh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdbf10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdbf40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:17:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:17:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:17:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 12:17:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.141,StartTime:2020-05-01 12:17:44 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-01 12:17:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://2eb579d3b30d61976ea3deb1aed6dbebafa6c58f08474a38b4ff1e48a05deca3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:17:58.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-b2658" for this suite. May 1 12:18:04.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:18:04.993: INFO: namespace: e2e-tests-deployment-b2658, resource: bindings, ignored listing per whitelist May 1 12:18:05.044: INFO: namespace e2e-tests-deployment-b2658 deletion completed in 6.136688448s • [SLOW TEST:29.613 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:18:05.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod May 1 12:18:10.630: INFO: Pod 
pod-hostip-cf870872-8ba5-11ea-88a3-0242ac110017 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:18:10.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-zjfm6" for this suite. May 1 12:18:32.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:18:32.739: INFO: namespace: e2e-tests-pods-zjfm6, resource: bindings, ignored listing per whitelist May 1 12:18:32.745: INFO: namespace e2e-tests-pods-zjfm6 deletion completed in 22.111954741s • [SLOW TEST:27.702 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:18:32.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 1 12:18:32.916: INFO: Waiting up to 5m0s for pod "var-expansion-dfcd1e37-8ba5-11ea-88a3-0242ac110017" in namespace 
"e2e-tests-var-expansion-mcpc4" to be "success or failure" May 1 12:18:32.921: INFO: Pod "var-expansion-dfcd1e37-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.508037ms May 1 12:18:34.937: INFO: Pod "var-expansion-dfcd1e37-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021280681s May 1 12:18:36.942: INFO: Pod "var-expansion-dfcd1e37-8ba5-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025927451s STEP: Saw pod success May 1 12:18:36.942: INFO: Pod "var-expansion-dfcd1e37-8ba5-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:18:36.945: INFO: Trying to get logs from node hunter-worker pod var-expansion-dfcd1e37-8ba5-11ea-88a3-0242ac110017 container dapi-container: STEP: delete the pod May 1 12:18:36.985: INFO: Waiting for pod var-expansion-dfcd1e37-8ba5-11ea-88a3-0242ac110017 to disappear May 1 12:18:36.993: INFO: Pod var-expansion-dfcd1e37-8ba5-11ea-88a3-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:18:36.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-mcpc4" for this suite. 
May 1 12:18:43.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:18:43.503: INFO: namespace: e2e-tests-var-expansion-mcpc4, resource: bindings, ignored listing per whitelist May 1 12:18:43.511: INFO: namespace e2e-tests-var-expansion-mcpc4 deletion completed in 6.513857118s • [SLOW TEST:10.765 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:18:43.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 12:18:43.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6415dd2-8ba5-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-2x44n" to be "success or failure" May 1 12:18:43.754: INFO: Pod "downwardapi-volume-e6415dd2-8ba5-11ea-88a3-0242ac110017": Phase="Pending", 
Reason="", readiness=false. Elapsed: 17.499484ms May 1 12:18:45.757: INFO: Pod "downwardapi-volume-e6415dd2-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02083028s May 1 12:18:47.770: INFO: Pod "downwardapi-volume-e6415dd2-8ba5-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033491942s STEP: Saw pod success May 1 12:18:47.770: INFO: Pod "downwardapi-volume-e6415dd2-8ba5-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:18:47.772: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-e6415dd2-8ba5-11ea-88a3-0242ac110017 container client-container: STEP: delete the pod May 1 12:18:47.802: INFO: Waiting for pod downwardapi-volume-e6415dd2-8ba5-11ea-88a3-0242ac110017 to disappear May 1 12:18:47.807: INFO: Pod downwardapi-volume-e6415dd2-8ba5-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:18:47.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2x44n" for this suite. 
May 1 12:18:53.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:18:53.920: INFO: namespace: e2e-tests-downward-api-2x44n, resource: bindings, ignored listing per whitelist May 1 12:18:53.944: INFO: namespace e2e-tests-downward-api-2x44n deletion completed in 6.13505108s • [SLOW TEST:10.433 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:18:53.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 1 12:18:54.113: INFO: Waiting up to 5m0s for pod "pod-ec6ce426-8ba5-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-6xj58" to be "success or failure" May 1 12:18:54.118: INFO: Pod "pod-ec6ce426-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.093365ms May 1 12:18:56.123: INFO: Pod "pod-ec6ce426-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009861998s May 1 12:18:58.127: INFO: Pod "pod-ec6ce426-8ba5-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013867403s STEP: Saw pod success May 1 12:18:58.127: INFO: Pod "pod-ec6ce426-8ba5-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:18:58.130: INFO: Trying to get logs from node hunter-worker pod pod-ec6ce426-8ba5-11ea-88a3-0242ac110017 container test-container: STEP: delete the pod May 1 12:18:58.151: INFO: Waiting for pod pod-ec6ce426-8ba5-11ea-88a3-0242ac110017 to disappear May 1 12:18:58.185: INFO: Pod pod-ec6ce426-8ba5-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:18:58.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6xj58" for this suite. May 1 12:19:04.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:19:04.274: INFO: namespace: e2e-tests-emptydir-6xj58, resource: bindings, ignored listing per whitelist May 1 12:19:04.283: INFO: namespace e2e-tests-emptydir-6xj58 deletion completed in 6.094443438s • [SLOW TEST:10.339 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:19:04.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 1 12:19:04.391: INFO: Waiting up to 5m0s for pod "downward-api-f2939b6d-8ba5-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-7hdf2" to be "success or failure" May 1 12:19:04.406: INFO: Pod "downward-api-f2939b6d-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 15.280124ms May 1 12:19:06.410: INFO: Pod "downward-api-f2939b6d-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019619548s May 1 12:19:08.722: INFO: Pod "downward-api-f2939b6d-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331659198s May 1 12:19:10.726: INFO: Pod "downward-api-f2939b6d-8ba5-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.335053327s STEP: Saw pod success May 1 12:19:10.726: INFO: Pod "downward-api-f2939b6d-8ba5-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:19:10.728: INFO: Trying to get logs from node hunter-worker pod downward-api-f2939b6d-8ba5-11ea-88a3-0242ac110017 container dapi-container: STEP: delete the pod May 1 12:19:10.767: INFO: Waiting for pod downward-api-f2939b6d-8ba5-11ea-88a3-0242ac110017 to disappear May 1 12:19:10.842: INFO: Pod downward-api-f2939b6d-8ba5-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:19:10.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7hdf2" for this suite. May 1 12:19:16.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:19:16.906: INFO: namespace: e2e-tests-downward-api-7hdf2, resource: bindings, ignored listing per whitelist May 1 12:19:16.957: INFO: namespace e2e-tests-downward-api-7hdf2 deletion completed in 6.110972505s • [SLOW TEST:12.673 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:19:16.957: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-fa22ca56-8ba5-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume secrets May 1 12:19:17.109: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fa272c12-8ba5-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-8wgxz" to be "success or failure" May 1 12:19:17.171: INFO: Pod "pod-projected-secrets-fa272c12-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 62.179637ms May 1 12:19:19.174: INFO: Pod "pod-projected-secrets-fa272c12-8ba5-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0653691s May 1 12:19:21.179: INFO: Pod "pod-projected-secrets-fa272c12-8ba5-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.06961409s STEP: Saw pod success May 1 12:19:21.179: INFO: Pod "pod-projected-secrets-fa272c12-8ba5-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:19:21.182: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-fa272c12-8ba5-11ea-88a3-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 1 12:19:21.257: INFO: Waiting for pod pod-projected-secrets-fa272c12-8ba5-11ea-88a3-0242ac110017 to disappear May 1 12:19:21.345: INFO: Pod pod-projected-secrets-fa272c12-8ba5-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:19:21.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8wgxz" for this suite. May 1 12:19:27.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:19:27.456: INFO: namespace: e2e-tests-projected-8wgxz, resource: bindings, ignored listing per whitelist May 1 12:19:27.488: INFO: namespace e2e-tests-projected-8wgxz deletion completed in 6.140297886s • [SLOW TEST:10.531 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client May 1 12:19:27.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-006fce12-8ba6-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume secrets May 1 12:19:27.660: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00721000-8ba6-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-tjgr6" to be "success or failure" May 1 12:19:27.699: INFO: Pod "pod-projected-secrets-00721000-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 38.262734ms May 1 12:19:29.703: INFO: Pod "pod-projected-secrets-00721000-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042777441s May 1 12:19:31.706: INFO: Pod "pod-projected-secrets-00721000-8ba6-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045676728s STEP: Saw pod success May 1 12:19:31.706: INFO: Pod "pod-projected-secrets-00721000-8ba6-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:19:31.709: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-00721000-8ba6-11ea-88a3-0242ac110017 container secret-volume-test: STEP: delete the pod May 1 12:19:31.792: INFO: Waiting for pod pod-projected-secrets-00721000-8ba6-11ea-88a3-0242ac110017 to disappear May 1 12:19:31.796: INFO: Pod pod-projected-secrets-00721000-8ba6-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:19:31.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tjgr6" for this suite. May 1 12:19:37.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:19:37.874: INFO: namespace: e2e-tests-projected-tjgr6, resource: bindings, ignored listing per whitelist May 1 12:19:37.922: INFO: namespace e2e-tests-projected-tjgr6 deletion completed in 6.121658938s • [SLOW TEST:10.433 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client 
May  1 12:19:37.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May  1 12:19:38.163: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"06a7025d-8ba6-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001f5a722), BlockOwnerDeletion:(*bool)(0xc001f5a723)}}
May  1 12:19:38.244: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"06a26d7d-8ba6-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001f10e1a), BlockOwnerDeletion:(*bool)(0xc001f10e1b)}}
May  1 12:19:38.276: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"06a2f48a-8ba6-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001e50bb2), BlockOwnerDeletion:(*bool)(0xc001e50bb3)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 12:19:43.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-njh7n" for this suite.
May  1 12:19:49.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 12:19:49.544: INFO: namespace: e2e-tests-gc-njh7n, resource: bindings, ignored listing per whitelist
May  1 12:19:49.544: INFO: namespace e2e-tests-gc-njh7n deletion completed in 6.109507965s

• [SLOW TEST:11.622 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 12:19:49.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May  1 12:19:57.722: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 12:19:57.731: INFO: Pod pod-with-poststart-http-hook still exists
May  1 12:19:59.731: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 12:19:59.855: INFO: Pod pod-with-poststart-http-hook still exists
May  1 12:20:01.731: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 12:20:01.738: INFO: Pod pod-with-poststart-http-hook still exists
May  1 12:20:03.731: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 12:20:06.898: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 12:20:06.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4jg5z" for this suite.
May  1 12:20:29.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 12:20:29.264: INFO: namespace: e2e-tests-container-lifecycle-hook-4jg5z, resource: bindings, ignored listing per whitelist
May  1 12:20:29.344: INFO: namespace e2e-tests-container-lifecycle-hook-4jg5z deletion completed in 22.186306879s

• [SLOW TEST:39.799 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 12:20:29.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-rtgf2.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rtgf2.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-rtgf2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-rtgf2.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rtgf2.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-rtgf2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  1 12:20:37.979: INFO: DNS probes using e2e-tests-dns-rtgf2/dns-test-25460d57-8ba6-11ea-88a3-0242ac110017 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 12:20:38.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-rtgf2" for this suite.
May  1 12:20:44.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 12:20:44.354: INFO: namespace: e2e-tests-dns-rtgf2, resource: bindings, ignored listing per whitelist
May  1 12:20:44.395: INFO: namespace e2e-tests-dns-rtgf2 deletion completed in 6.14147231s

• [SLOW TEST:15.051 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 12:20:44.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May  1 12:20:44.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e445a7f-8ba6-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-z8mx2" to be "success or failure"
May  1 12:20:44.540: INFO: Pod "downwardapi-volume-2e445a7f-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.595046ms
May  1 12:20:46.544: INFO: Pod "downwardapi-volume-2e445a7f-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009746304s
May  1 12:20:48.548: INFO: Pod "downwardapi-volume-2e445a7f-8ba6-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013517581s
STEP: Saw pod success
May  1 12:20:48.548: INFO: Pod "downwardapi-volume-2e445a7f-8ba6-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May  1 12:20:48.551: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2e445a7f-8ba6-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May  1 12:20:48.565: INFO: Waiting for pod downwardapi-volume-2e445a7f-8ba6-11ea-88a3-0242ac110017 to disappear
May  1 12:20:48.570: INFO: Pod downwardapi-volume-2e445a7f-8ba6-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 12:20:48.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z8mx2" for this suite.
May  1 12:20:54.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 12:20:54.641: INFO: namespace: e2e-tests-projected-z8mx2, resource: bindings, ignored listing per whitelist
May  1 12:20:54.704: INFO: namespace e2e-tests-projected-z8mx2 deletion completed in 6.105263446s

• [SLOW TEST:10.309 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 12:20:54.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-3466d45e-8ba6-11ea-88a3-0242ac110017
STEP: Creating secret with name secret-projected-all-test-volume-3466d44b-8ba6-11ea-88a3-0242ac110017
STEP: Creating a pod to test Check all projections for projected volume plugin
May  1 12:20:54.835: INFO: Waiting up to 5m0s for pod "projected-volume-3466d418-8ba6-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-ndrdc" to be "success or failure"
May  1 12:20:54.839: INFO: Pod "projected-volume-3466d418-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033988ms
May  1 12:20:56.844: INFO: Pod "projected-volume-3466d418-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008551862s
May  1 12:20:58.848: INFO: Pod "projected-volume-3466d418-8ba6-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012863277s
STEP: Saw pod success
May  1 12:20:58.848: INFO: Pod "projected-volume-3466d418-8ba6-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May  1 12:20:58.852: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-3466d418-8ba6-11ea-88a3-0242ac110017 container projected-all-volume-test:
STEP: delete the pod
May  1 12:20:58.877: INFO: Waiting for pod projected-volume-3466d418-8ba6-11ea-88a3-0242ac110017 to disappear
May  1 12:20:58.888: INFO: Pod projected-volume-3466d418-8ba6-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 12:20:58.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ndrdc" for this suite.
May  1 12:21:04.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 12:21:04.959: INFO: namespace: e2e-tests-projected-ndrdc, resource: bindings, ignored listing per whitelist
May  1 12:21:04.991: INFO: namespace e2e-tests-projected-ndrdc deletion completed in 6.099960442s

• [SLOW TEST:10.287 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 12:21:04.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-3ac4d311-8ba6-11ea-88a3-0242ac110017
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-3ac4d311-8ba6-11ea-88a3-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 12:21:12.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kt986" for this suite.
May  1 12:21:34.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 12:21:34.150: INFO: namespace: e2e-tests-configmap-kt986, resource: bindings, ignored listing per whitelist
May  1 12:21:34.164: INFO: namespace e2e-tests-configmap-kt986 deletion completed in 22.087914128s

• [SLOW TEST:29.173 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 12:21:34.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-4be8f94c-8ba6-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume configMaps
May  1 12:21:34.289: INFO: Waiting up to 5m0s for pod "pod-configmaps-4be9af09-8ba6-11ea-88a3-0242ac110017" in namespace "e2e-tests-configmap-2r9r5" to be "success or failure"
May  1 12:21:34.306: INFO: Pod "pod-configmaps-4be9af09-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 17.234928ms
May  1 12:21:36.310: INFO: Pod "pod-configmaps-4be9af09-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021292887s
May  1 12:21:38.359: INFO: Pod "pod-configmaps-4be9af09-8ba6-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070169933s
STEP: Saw pod success
May  1 12:21:38.359: INFO: Pod "pod-configmaps-4be9af09-8ba6-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May  1 12:21:38.362: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-4be9af09-8ba6-11ea-88a3-0242ac110017 container configmap-volume-test:
STEP: delete the pod
May  1 12:21:38.393: INFO: Waiting for pod pod-configmaps-4be9af09-8ba6-11ea-88a3-0242ac110017 to disappear
May  1 12:21:38.398: INFO: Pod pod-configmaps-4be9af09-8ba6-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 12:21:38.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2r9r5" for this suite.
May  1 12:21:44.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 12:21:44.504: INFO: namespace: e2e-tests-configmap-2r9r5, resource: bindings, ignored listing per whitelist
May  1 12:21:44.505: INFO: namespace e2e-tests-configmap-2r9r5 deletion completed in 6.103735814s

• [SLOW TEST:10.340 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 12:21:44.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May  1 12:21:44.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-521655d4-8ba6-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-5gfpv" to be "success or failure"
May  1 12:21:44.638: INFO: Pod "downwardapi-volume-521655d4-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099245ms
May  1 12:21:46.642: INFO: Pod "downwardapi-volume-521655d4-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008632232s
May  1 12:21:48.784: INFO: Pod "downwardapi-volume-521655d4-8ba6-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.150695652s
May  1 12:21:50.790: INFO: Pod "downwardapi-volume-521655d4-8ba6-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.155863075s
STEP: Saw pod success
May  1 12:21:50.790: INFO: Pod "downwardapi-volume-521655d4-8ba6-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May  1 12:21:50.793: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-521655d4-8ba6-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May  1 12:21:50.851: INFO: Waiting for pod downwardapi-volume-521655d4-8ba6-11ea-88a3-0242ac110017 to disappear
May  1 12:21:50.859: INFO: Pod downwardapi-volume-521655d4-8ba6-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 12:21:50.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5gfpv" for this suite.
May  1 12:21:56.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 12:21:56.977: INFO: namespace: e2e-tests-projected-5gfpv, resource: bindings, ignored listing per whitelist
May  1 12:21:56.997: INFO: namespace e2e-tests-projected-5gfpv deletion completed in 6.135305542s

• [SLOW TEST:12.492 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 12:21:56.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May  1 12:21:57.128: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5982fc12-8ba6-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-4x5tp" to be "success or failure"
May  1 12:21:57.181: INFO: Pod "downwardapi-volume-5982fc12-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 52.636936ms
May  1 12:21:59.263: INFO: Pod "downwardapi-volume-5982fc12-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134727401s
May  1 12:22:01.293: INFO: Pod "downwardapi-volume-5982fc12-8ba6-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165312875s
May  1 12:22:03.298: INFO: Pod "downwardapi-volume-5982fc12-8ba6-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.170247196s
STEP: Saw pod success
May  1 12:22:03.298: INFO: Pod "downwardapi-volume-5982fc12-8ba6-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May  1 12:22:03.301: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5982fc12-8ba6-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May  1 12:22:03.372: INFO: Waiting for pod downwardapi-volume-5982fc12-8ba6-11ea-88a3-0242ac110017 to disappear
May  1 12:22:03.444: INFO: Pod downwardapi-volume-5982fc12-8ba6-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 12:22:03.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4x5tp" for this suite.
May  1 12:22:11.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May  1 12:22:11.516: INFO: namespace: e2e-tests-downward-api-4x5tp, resource: bindings, ignored listing per whitelist
May  1 12:22:11.576: INFO: namespace e2e-tests-downward-api-4x5tp deletion completed in 8.112916885s

• [SLOW TEST:14.579 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May  1 12:22:11.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-628d9aef-8ba6-11ea-88a3-0242ac110017
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May  1 12:22:22.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tmnhr" for this suite.
May 1 12:22:44.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:22:44.770: INFO: namespace: e2e-tests-configmap-tmnhr, resource: bindings, ignored listing per whitelist May 1 12:22:44.824: INFO: namespace e2e-tests-configmap-tmnhr deletion completed in 22.13764779s • [SLOW TEST:33.247 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:22:44.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-76338915-8ba6-11ea-88a3-0242ac110017 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-76338915-8ba6-11ea-88a3-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:22:51.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-projected-q2x98" for this suite. May 1 12:23:17.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:23:17.572: INFO: namespace: e2e-tests-projected-q2x98, resource: bindings, ignored listing per whitelist May 1 12:23:17.618: INFO: namespace e2e-tests-projected-q2x98 deletion completed in 26.235616145s • [SLOW TEST:32.795 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:23:17.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 1 12:23:19.243: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix272490716/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 
12:23:19.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pf2rt" for this suite. May 1 12:23:27.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:23:27.558: INFO: namespace: e2e-tests-kubectl-pf2rt, resource: bindings, ignored listing per whitelist May 1 12:23:27.575: INFO: namespace e2e-tests-kubectl-pf2rt deletion completed in 8.122980819s • [SLOW TEST:9.956 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:23:27.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 12:23:28.051: INFO: Creating ReplicaSet my-hostname-basic-8fbbf9d4-8ba6-11ea-88a3-0242ac110017 May 1 12:23:28.246: INFO: Pod name my-hostname-basic-8fbbf9d4-8ba6-11ea-88a3-0242ac110017: Found 0 pods out of 1 May 1 12:23:33.250: INFO: Pod name my-hostname-basic-8fbbf9d4-8ba6-11ea-88a3-0242ac110017: Found 1 
pods out of 1 May 1 12:23:33.251: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8fbbf9d4-8ba6-11ea-88a3-0242ac110017" is running May 1 12:23:37.256: INFO: Pod "my-hostname-basic-8fbbf9d4-8ba6-11ea-88a3-0242ac110017-d7fhs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 12:23:28 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 12:23:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8fbbf9d4-8ba6-11ea-88a3-0242ac110017]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 12:23:28 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8fbbf9d4-8ba6-11ea-88a3-0242ac110017]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 12:23:28 +0000 UTC Reason: Message:}]) May 1 12:23:37.256: INFO: Trying to dial the pod May 1 12:23:42.302: INFO: Controller my-hostname-basic-8fbbf9d4-8ba6-11ea-88a3-0242ac110017: Got expected result from replica 1 [my-hostname-basic-8fbbf9d4-8ba6-11ea-88a3-0242ac110017-d7fhs]: "my-hostname-basic-8fbbf9d4-8ba6-11ea-88a3-0242ac110017-d7fhs", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:23:42.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-hht9t" for this suite. 
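
The ReplicaSet test above creates one replica serving its own hostname from a public image and dials each pod until it answers with its pod name. A minimal manifest along the lines of what the test creates (the name here is illustrative — the real run uses a UUID-suffixed name — and the serve-hostname image is an assumption about the helper image used):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic          # hypothetical; the test generates a UUID-suffixed name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # assumed public helper image that responds with the pod's hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376      # port the test dials on each replica
```
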
May 1 12:23:48.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:23:48.487: INFO: namespace: e2e-tests-replicaset-hht9t, resource: bindings, ignored listing per whitelist May 1 12:23:48.712: INFO: namespace e2e-tests-replicaset-hht9t deletion completed in 6.403831434s • [SLOW TEST:21.137 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:23:48.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-2ss2d [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 1 12:23:49.438: INFO: Found 0 stateful pods, waiting for 
3 May 1 12:23:59.443: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 12:23:59.443: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 12:23:59.443: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 1 12:24:09.443: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 12:24:09.443: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 12:24:09.443: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 1 12:24:09.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ss2d ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 12:24:09.693: INFO: stderr: "I0501 12:24:09.575141 1756 log.go:172] (0xc000866210) (0xc00084e5a0) Create stream\nI0501 12:24:09.575211 1756 log.go:172] (0xc000866210) (0xc00084e5a0) Stream added, broadcasting: 1\nI0501 12:24:09.578736 1756 log.go:172] (0xc000866210) Reply frame received for 1\nI0501 12:24:09.578777 1756 log.go:172] (0xc000866210) (0xc00084e640) Create stream\nI0501 12:24:09.578790 1756 log.go:172] (0xc000866210) (0xc00084e640) Stream added, broadcasting: 3\nI0501 12:24:09.579860 1756 log.go:172] (0xc000866210) Reply frame received for 3\nI0501 12:24:09.579908 1756 log.go:172] (0xc000866210) (0xc0006d8000) Create stream\nI0501 12:24:09.579924 1756 log.go:172] (0xc000866210) (0xc0006d8000) Stream added, broadcasting: 5\nI0501 12:24:09.580833 1756 log.go:172] (0xc000866210) Reply frame received for 5\nI0501 12:24:09.682933 1756 log.go:172] (0xc000866210) Data frame received for 5\nI0501 12:24:09.682968 1756 log.go:172] (0xc0006d8000) (5) Data frame handling\nI0501 12:24:09.683014 1756 log.go:172] (0xc000866210) Data frame received for 3\nI0501 12:24:09.683048 1756 log.go:172] (0xc00084e640) 
(3) Data frame handling\nI0501 12:24:09.683071 1756 log.go:172] (0xc00084e640) (3) Data frame sent\nI0501 12:24:09.683092 1756 log.go:172] (0xc000866210) Data frame received for 3\nI0501 12:24:09.683102 1756 log.go:172] (0xc00084e640) (3) Data frame handling\nI0501 12:24:09.684953 1756 log.go:172] (0xc000866210) Data frame received for 1\nI0501 12:24:09.684982 1756 log.go:172] (0xc00084e5a0) (1) Data frame handling\nI0501 12:24:09.685006 1756 log.go:172] (0xc00084e5a0) (1) Data frame sent\nI0501 12:24:09.685052 1756 log.go:172] (0xc000866210) (0xc00084e5a0) Stream removed, broadcasting: 1\nI0501 12:24:09.685323 1756 log.go:172] (0xc000866210) Go away received\nI0501 12:24:09.685609 1756 log.go:172] (0xc000866210) (0xc00084e5a0) Stream removed, broadcasting: 1\nI0501 12:24:09.685644 1756 log.go:172] (0xc000866210) (0xc00084e640) Stream removed, broadcasting: 3\nI0501 12:24:09.685668 1756 log.go:172] (0xc000866210) (0xc0006d8000) Stream removed, broadcasting: 5\n" May 1 12:24:09.693: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 12:24:09.693: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 1 12:24:19.722: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 1 12:24:29.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ss2d ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 12:24:30.027: INFO: stderr: "I0501 12:24:29.971724 1778 log.go:172] (0xc00016c790) (0xc0006132c0) Create stream\nI0501 12:24:29.971784 1778 log.go:172] (0xc00016c790) (0xc0006132c0) Stream added, broadcasting: 1\nI0501 12:24:29.974143 1778 log.go:172] (0xc00016c790) Reply frame received for 1\nI0501 
12:24:29.974178 1778 log.go:172] (0xc00016c790) (0xc000726000) Create stream\nI0501 12:24:29.974187 1778 log.go:172] (0xc00016c790) (0xc000726000) Stream added, broadcasting: 3\nI0501 12:24:29.974914 1778 log.go:172] (0xc00016c790) Reply frame received for 3\nI0501 12:24:29.974942 1778 log.go:172] (0xc00016c790) (0xc00070e000) Create stream\nI0501 12:24:29.974950 1778 log.go:172] (0xc00016c790) (0xc00070e000) Stream added, broadcasting: 5\nI0501 12:24:29.975697 1778 log.go:172] (0xc00016c790) Reply frame received for 5\nI0501 12:24:30.023748 1778 log.go:172] (0xc00016c790) Data frame received for 3\nI0501 12:24:30.023771 1778 log.go:172] (0xc000726000) (3) Data frame handling\nI0501 12:24:30.023777 1778 log.go:172] (0xc000726000) (3) Data frame sent\nI0501 12:24:30.023782 1778 log.go:172] (0xc00016c790) Data frame received for 3\nI0501 12:24:30.023786 1778 log.go:172] (0xc000726000) (3) Data frame handling\nI0501 12:24:30.023815 1778 log.go:172] (0xc00016c790) Data frame received for 5\nI0501 12:24:30.023836 1778 log.go:172] (0xc00070e000) (5) Data frame handling\nI0501 12:24:30.024721 1778 log.go:172] (0xc00016c790) Data frame received for 1\nI0501 12:24:30.024740 1778 log.go:172] (0xc0006132c0) (1) Data frame handling\nI0501 12:24:30.024761 1778 log.go:172] (0xc0006132c0) (1) Data frame sent\nI0501 12:24:30.024809 1778 log.go:172] (0xc00016c790) (0xc0006132c0) Stream removed, broadcasting: 1\nI0501 12:24:30.024876 1778 log.go:172] (0xc00016c790) Go away received\nI0501 12:24:30.024972 1778 log.go:172] (0xc00016c790) (0xc0006132c0) Stream removed, broadcasting: 1\nI0501 12:24:30.024986 1778 log.go:172] (0xc00016c790) (0xc000726000) Stream removed, broadcasting: 3\nI0501 12:24:30.024995 1778 log.go:172] (0xc00016c790) (0xc00070e000) Stream removed, broadcasting: 5\n" May 1 12:24:30.027: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 12:24:30.027: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: 
'/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 12:25:00.219: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ss2d/ss2 to complete update STEP: Rolling back to a previous revision May 1 12:25:10.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ss2d ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 12:25:10.508: INFO: stderr: "I0501 12:25:10.356720 1801 log.go:172] (0xc0008362c0) (0xc000637360) Create stream\nI0501 12:25:10.356770 1801 log.go:172] (0xc0008362c0) (0xc000637360) Stream added, broadcasting: 1\nI0501 12:25:10.358793 1801 log.go:172] (0xc0008362c0) Reply frame received for 1\nI0501 12:25:10.358848 1801 log.go:172] (0xc0008362c0) (0xc000678000) Create stream\nI0501 12:25:10.358861 1801 log.go:172] (0xc0008362c0) (0xc000678000) Stream added, broadcasting: 3\nI0501 12:25:10.359718 1801 log.go:172] (0xc0008362c0) Reply frame received for 3\nI0501 12:25:10.359746 1801 log.go:172] (0xc0008362c0) (0xc000416000) Create stream\nI0501 12:25:10.359758 1801 log.go:172] (0xc0008362c0) (0xc000416000) Stream added, broadcasting: 5\nI0501 12:25:10.360444 1801 log.go:172] (0xc0008362c0) Reply frame received for 5\nI0501 12:25:10.501815 1801 log.go:172] (0xc0008362c0) Data frame received for 3\nI0501 12:25:10.501867 1801 log.go:172] (0xc000678000) (3) Data frame handling\nI0501 12:25:10.501882 1801 log.go:172] (0xc000678000) (3) Data frame sent\nI0501 12:25:10.501900 1801 log.go:172] (0xc0008362c0) Data frame received for 3\nI0501 12:25:10.501928 1801 log.go:172] (0xc000678000) (3) Data frame handling\nI0501 12:25:10.501961 1801 log.go:172] (0xc0008362c0) Data frame received for 5\nI0501 12:25:10.501980 1801 log.go:172] (0xc000416000) (5) Data frame handling\nI0501 12:25:10.503862 1801 log.go:172] (0xc0008362c0) Data frame received for 1\nI0501 12:25:10.503894 1801 log.go:172] (0xc000637360) (1) Data frame handling\nI0501 12:25:10.503944 1801 log.go:172] 
(0xc000637360) (1) Data frame sent\nI0501 12:25:10.503974 1801 log.go:172] (0xc0008362c0) (0xc000637360) Stream removed, broadcasting: 1\nI0501 12:25:10.503992 1801 log.go:172] (0xc0008362c0) Go away received\nI0501 12:25:10.504247 1801 log.go:172] (0xc0008362c0) (0xc000637360) Stream removed, broadcasting: 1\nI0501 12:25:10.504274 1801 log.go:172] (0xc0008362c0) (0xc000678000) Stream removed, broadcasting: 3\nI0501 12:25:10.504291 1801 log.go:172] (0xc0008362c0) (0xc000416000) Stream removed, broadcasting: 5\n" May 1 12:25:10.508: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 12:25:10.508: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 12:25:20.542: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 1 12:25:30.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ss2d ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 12:25:30.775: INFO: stderr: "I0501 12:25:30.694209 1823 log.go:172] (0xc0007de2c0) (0xc000746640) Create stream\nI0501 12:25:30.694267 1823 log.go:172] (0xc0007de2c0) (0xc000746640) Stream added, broadcasting: 1\nI0501 12:25:30.696507 1823 log.go:172] (0xc0007de2c0) Reply frame received for 1\nI0501 12:25:30.696565 1823 log.go:172] (0xc0007de2c0) (0xc0007466e0) Create stream\nI0501 12:25:30.696583 1823 log.go:172] (0xc0007de2c0) (0xc0007466e0) Stream added, broadcasting: 3\nI0501 12:25:30.697965 1823 log.go:172] (0xc0007de2c0) Reply frame received for 3\nI0501 12:25:30.698018 1823 log.go:172] (0xc0007de2c0) (0xc000576d20) Create stream\nI0501 12:25:30.698041 1823 log.go:172] (0xc0007de2c0) (0xc000576d20) Stream added, broadcasting: 5\nI0501 12:25:30.699282 1823 log.go:172] (0xc0007de2c0) Reply frame received for 5\nI0501 12:25:30.769706 1823 log.go:172] (0xc0007de2c0) Data frame received for 3\nI0501 
12:25:30.769744 1823 log.go:172] (0xc0007466e0) (3) Data frame handling\nI0501 12:25:30.769764 1823 log.go:172] (0xc0007466e0) (3) Data frame sent\nI0501 12:25:30.769773 1823 log.go:172] (0xc0007de2c0) Data frame received for 3\nI0501 12:25:30.769781 1823 log.go:172] (0xc0007466e0) (3) Data frame handling\nI0501 12:25:30.769874 1823 log.go:172] (0xc0007de2c0) Data frame received for 5\nI0501 12:25:30.769900 1823 log.go:172] (0xc000576d20) (5) Data frame handling\nI0501 12:25:30.771249 1823 log.go:172] (0xc0007de2c0) Data frame received for 1\nI0501 12:25:30.771270 1823 log.go:172] (0xc000746640) (1) Data frame handling\nI0501 12:25:30.771289 1823 log.go:172] (0xc000746640) (1) Data frame sent\nI0501 12:25:30.771306 1823 log.go:172] (0xc0007de2c0) (0xc000746640) Stream removed, broadcasting: 1\nI0501 12:25:30.771495 1823 log.go:172] (0xc0007de2c0) Go away received\nI0501 12:25:30.771567 1823 log.go:172] (0xc0007de2c0) (0xc000746640) Stream removed, broadcasting: 1\nI0501 12:25:30.771589 1823 log.go:172] (0xc0007de2c0) (0xc0007466e0) Stream removed, broadcasting: 3\nI0501 12:25:30.771605 1823 log.go:172] (0xc0007de2c0) (0xc000576d20) Stream removed, broadcasting: 5\n" May 1 12:25:30.775: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 12:25:30.775: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 12:25:40.794: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ss2d/ss2 to complete update May 1 12:25:40.794: INFO: Waiting for Pod e2e-tests-statefulset-2ss2d/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 1 12:25:40.794: INFO: Waiting for Pod e2e-tests-statefulset-2ss2d/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 1 12:25:50.801: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ss2d/ss2 to complete update May 1 12:25:50.801: INFO: Waiting for Pod e2e-tests-statefulset-2ss2d/ss2-0 to have 
revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 1 12:25:50.801: INFO: Waiting for Pod e2e-tests-statefulset-2ss2d/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 1 12:26:00.805: INFO: Waiting for StatefulSet e2e-tests-statefulset-2ss2d/ss2 to complete update May 1 12:26:00.805: INFO: Waiting for Pod e2e-tests-statefulset-2ss2d/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 1 12:26:10.802: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2ss2d May 1 12:26:10.804: INFO: Scaling statefulset ss2 to 0 May 1 12:26:41.224: INFO: Waiting for statefulset status.replicas updated to 0 May 1 12:26:41.227: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:26:41.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2ss2d" for this suite. 
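
The rolling-update test above updates the pod template image from nginx:1.14-alpine to 1.15-alpine, waits for pods to be replaced in reverse ordinal order (ss2-2, then ss2-1, then ss2-0), and then rolls back to the previous controller revision. A sketch of the StatefulSet being exercised (the name `ss2` and images appear in the log; the labels, service name, and replica count are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2                        # matches the name in the log
spec:
  serviceName: test                # assumed headless service created in the test namespace
  replicas: 3
  updateStrategy:
    type: RollingUpdate            # drives the reverse-ordinal pod replacement seen above
  selector:
    matchLabels:
      app: ss2-test                # illustrative label, not taken from this run
  template:
    metadata:
      labels:
        app: ss2-test
    spec:
      containers:
      - name: nginx
        # updated to docker.io/library/nginx:1.15-alpine, then rolled back
        image: docker.io/library/nginx:1.14-alpine
```

Each template change produces a new ControllerRevision (the `ss2-7c9b54fd4c` / `ss2-6c5cd755cd` hashes in the log), which is what the rollback targets.
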
May 1 12:26:47.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:26:47.653: INFO: namespace: e2e-tests-statefulset-2ss2d, resource: bindings, ignored listing per whitelist May 1 12:26:47.677: INFO: namespace e2e-tests-statefulset-2ss2d deletion completed in 6.189868881s • [SLOW TEST:178.965 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:26:47.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 12:26:47.773: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-06c49f64-8ba7-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-r8j5k" to be "success or failure" May 1 12:26:47.776: INFO: Pod "downwardapi-volume-06c49f64-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.601013ms May 1 12:26:49.867: INFO: Pod "downwardapi-volume-06c49f64-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093798101s May 1 12:26:51.914: INFO: Pod "downwardapi-volume-06c49f64-8ba7-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141648173s STEP: Saw pod success May 1 12:26:51.914: INFO: Pod "downwardapi-volume-06c49f64-8ba7-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:26:51.918: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-06c49f64-8ba7-11ea-88a3-0242ac110017 container client-container: STEP: delete the pod May 1 12:26:52.288: INFO: Waiting for pod downwardapi-volume-06c49f64-8ba7-11ea-88a3-0242ac110017 to disappear May 1 12:26:52.555: INFO: Pod downwardapi-volume-06c49f64-8ba7-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:26:52.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-r8j5k" for this suite. 
May 1 12:26:58.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:26:58.722: INFO: namespace: e2e-tests-downward-api-r8j5k, resource: bindings, ignored listing per whitelist May 1 12:26:58.726: INFO: namespace e2e-tests-downward-api-r8j5k deletion completed in 6.134024904s • [SLOW TEST:11.048 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:26:58.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 1 12:26:58.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine 
--namespace=e2e-tests-kubectl-vx4hb' May 1 12:26:59.078: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 1 12:26:59.078: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 1 12:27:01.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-vx4hb' May 1 12:27:01.777: INFO: stderr: "" May 1 12:27:01.777: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:27:01.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vx4hb" for this suite. 
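
The stderr above notes that generator-based `kubectl run` was deprecated. The Deployment that `--generator=deployment/apps.v1` produced is roughly equivalent to applying a manifest like this (the `run:` label/selector convention is my recollection of the generator's behavior, so treat it as an approximation):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment   # assumed generator convention
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```
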
May 1 12:27:08.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:27:08.104: INFO: namespace: e2e-tests-kubectl-vx4hb, resource: bindings, ignored listing per whitelist May 1 12:27:08.114: INFO: namespace e2e-tests-kubectl-vx4hb deletion completed in 6.150146576s • [SLOW TEST:9.388 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:27:08.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components May 1 12:27:08.216: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 1 12:27:08.216: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gsmzj' May 1 12:27:10.950: INFO: stderr: "" May 1 12:27:10.950: INFO: stdout: "service/redis-slave created\n" May 1 12:27:10.950: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 1 12:27:10.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gsmzj' May 1 12:27:11.284: INFO: stderr: "" May 1 12:27:11.284: INFO: stdout: "service/redis-master created\n" May 1 12:27:11.284: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 1 12:27:11.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gsmzj' May 1 12:27:11.672: INFO: stderr: "" May 1 12:27:11.672: INFO: stdout: "service/frontend created\n" May 1 12:27:11.672: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 1 12:27:11.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gsmzj' May 1 12:27:11.959: INFO: stderr: "" May 1 
12:27:11.959: INFO: stdout: "deployment.extensions/frontend created\n" May 1 12:27:11.959: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 1 12:27:11.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gsmzj' May 1 12:27:12.250: INFO: stderr: "" May 1 12:27:12.250: INFO: stdout: "deployment.extensions/redis-master created\n" May 1 12:27:12.251: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 1 12:27:12.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gsmzj' May 1 12:27:12.558: INFO: stderr: "" May 1 12:27:12.558: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 1 12:27:12.558: INFO: Waiting for all frontend pods to be Running. May 1 12:27:22.608: INFO: Waiting for frontend to serve content. May 1 12:27:22.649: INFO: Trying to add a new entry to the guestbook. May 1 12:27:22.668: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources
May 1 12:27:22.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gsmzj'
May 1 12:27:22.848: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 12:27:22.848: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
May 1 12:27:22.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gsmzj'
May 1 12:27:22.991: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 12:27:22.991: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
May 1 12:27:22.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gsmzj'
May 1 12:27:23.145: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 12:27:23.145: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 1 12:27:23.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gsmzj'
May 1 12:27:23.278: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 12:27:23.278: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 1 12:27:23.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gsmzj'
May 1 12:27:23.421: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 12:27:23.421: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
May 1 12:27:23.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gsmzj'
May 1 12:27:23.955: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 12:27:23.955: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:27:23.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gsmzj" for this suite.
May 1 12:28:04.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:28:04.066: INFO: namespace: e2e-tests-kubectl-gsmzj, resource: bindings, ignored listing per whitelist
May 1 12:28:04.076: INFO: namespace e2e-tests-kubectl-gsmzj deletion completed in 40.106974837s

• [SLOW TEST:55.962 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:28:04.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-r8j57
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 1 12:28:04.182: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 1 12:28:32.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.160:8080/dial?request=hostName&protocol=udp&host=10.244.1.159&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-r8j57 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 12:28:32.338: INFO: >>> kubeConfig: /root/.kube/config
I0501 12:28:32.363954       6 log.go:172] (0xc0000eafd0) (0xc001a49540) Create stream
I0501 12:28:32.363983       6 log.go:172] (0xc0000eafd0) (0xc001a49540) Stream added, broadcasting: 1
I0501 12:28:32.366409       6 log.go:172] (0xc0000eafd0) Reply frame received for 1
I0501 12:28:32.366451       6 log.go:172] (0xc0000eafd0) (0xc001a49720) Create stream
I0501 12:28:32.366460       6 log.go:172] (0xc0000eafd0) (0xc001a49720) Stream added, broadcasting: 3
I0501 12:28:32.367276       6 log.go:172] (0xc0000eafd0) Reply frame received for 3
I0501 12:28:32.367306       6 log.go:172] (0xc0000eafd0) (0xc00220b0e0) Create stream
I0501 12:28:32.367322       6 log.go:172] (0xc0000eafd0) (0xc00220b0e0) Stream added, broadcasting: 5
I0501 12:28:32.367955       6 log.go:172] (0xc0000eafd0) Reply frame received for 5
I0501 12:28:32.443572       6 log.go:172] (0xc0000eafd0) Data frame received for 3
I0501 12:28:32.443618       6 log.go:172] (0xc001a49720) (3) Data frame handling
I0501 12:28:32.443678       6 log.go:172] (0xc001a49720) (3) Data frame sent
I0501 12:28:32.443978       6 log.go:172] (0xc0000eafd0) Data frame received for 3
I0501 12:28:32.443999       6 log.go:172] (0xc001a49720) (3) Data frame handling
I0501 12:28:32.444242       6 log.go:172] (0xc0000eafd0) Data frame received for 5
I0501 12:28:32.444265       6 log.go:172] (0xc00220b0e0) (5) Data frame handling
I0501 12:28:32.445871       6 log.go:172] (0xc0000eafd0) Data frame received for 1
I0501 12:28:32.445888       6 log.go:172] (0xc001a49540) (1) Data frame handling
I0501 12:28:32.445908       6 log.go:172] (0xc001a49540) (1) Data frame sent
I0501 12:28:32.445924       6 log.go:172] (0xc0000eafd0) (0xc001a49540) Stream removed, broadcasting: 1
I0501 12:28:32.445939       6 log.go:172] (0xc0000eafd0) Go away received
I0501 12:28:32.446045       6 log.go:172] (0xc0000eafd0) (0xc001a49540) Stream removed, broadcasting: 1
I0501 12:28:32.446082       6 log.go:172] (0xc0000eafd0) (0xc001a49720) Stream removed, broadcasting: 3
I0501 12:28:32.446102       6 log.go:172] (0xc0000eafd0) (0xc00220b0e0) Stream removed, broadcasting: 5
May 1 12:28:32.446: INFO: Waiting for endpoints: map[]
May 1 12:28:32.449: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.160:8080/dial?request=hostName&protocol=udp&host=10.244.2.197&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-r8j57 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 1 12:28:32.449: INFO: >>> kubeConfig: /root/.kube/config
I0501 12:28:32.479164       6 log.go:172] (0xc0017462c0) (0xc00220b360) Create stream
I0501 12:28:32.479198       6 log.go:172] (0xc0017462c0) (0xc00220b360) Stream added, broadcasting: 1
I0501 12:28:32.480952       6 log.go:172] (0xc0017462c0) Reply frame received for 1
I0501 12:28:32.481006       6 log.go:172] (0xc0017462c0) (0xc0019df680) Create stream
I0501 12:28:32.481023       6 log.go:172] (0xc0017462c0) (0xc0019df680) Stream added, broadcasting: 3
I0501 12:28:32.482219       6 log.go:172] (0xc0017462c0) Reply frame received for 3
I0501 12:28:32.482269       6 log.go:172] (0xc0017462c0) (0xc001da9ae0) Create stream
I0501 12:28:32.482289       6 log.go:172] (0xc0017462c0) (0xc001da9ae0) Stream added, broadcasting: 5
I0501 12:28:32.483314       6 log.go:172] (0xc0017462c0) Reply frame received for 5
I0501 12:28:32.542954       6 log.go:172] (0xc0017462c0) Data frame received for 3
I0501 12:28:32.542985       6 log.go:172] (0xc0019df680) (3) Data frame handling
I0501 12:28:32.543006       6 log.go:172] (0xc0019df680) (3) Data frame sent
I0501 12:28:32.543879       6 log.go:172] (0xc0017462c0) Data frame received for 3
I0501 12:28:32.543915       6 log.go:172] (0xc0019df680) (3) Data frame handling
I0501 12:28:32.543982       6 log.go:172] (0xc0017462c0) Data frame received for 5
I0501 12:28:32.544018       6 log.go:172] (0xc001da9ae0) (5) Data frame handling
I0501 12:28:32.546100       6 log.go:172] (0xc0017462c0) Data frame received for 1
I0501 12:28:32.546122       6 log.go:172] (0xc00220b360) (1) Data frame handling
I0501 12:28:32.546133       6 log.go:172] (0xc00220b360) (1) Data frame sent
I0501 12:28:32.546144       6 log.go:172] (0xc0017462c0) (0xc00220b360) Stream removed, broadcasting: 1
I0501 12:28:32.546254       6 log.go:172] (0xc0017462c0) (0xc00220b360) Stream removed, broadcasting: 1
I0501 12:28:32.546275       6 log.go:172] (0xc0017462c0) (0xc0019df680) Stream removed, broadcasting: 3
I0501 12:28:32.546434       6 log.go:172] (0xc0017462c0) Go away received
I0501 12:28:32.546468       6 log.go:172] (0xc0017462c0) (0xc001da9ae0) Stream removed, broadcasting: 5
May 1 12:28:32.546: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:28:32.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-r8j57" for this suite.
May 1 12:28:56.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:28:56.812: INFO: namespace: e2e-tests-pod-network-test-r8j57, resource: bindings, ignored listing per whitelist
May 1 12:28:56.849: INFO: namespace e2e-tests-pod-network-test-r8j57 deletion completed in 24.285687473s

• [SLOW TEST:52.772 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:28:56.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 1 12:28:57.809: INFO: Waiting up to 5m0s for pod "downward-api-53f19855-8ba7-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-jgkww" to be "success or failure"
May 1 12:28:58.240: INFO: Pod "downward-api-53f19855-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 430.705476ms
May 1 12:29:00.300: INFO: Pod "downward-api-53f19855-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.491117545s
May 1 12:29:02.304: INFO: Pod "downward-api-53f19855-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.495132187s
May 1 12:29:04.308: INFO: Pod "downward-api-53f19855-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.499407631s
May 1 12:29:06.313: INFO: Pod "downward-api-53f19855-8ba7-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 8.503976219s
May 1 12:29:08.318: INFO: Pod "downward-api-53f19855-8ba7-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.508832344s
STEP: Saw pod success
May 1 12:29:08.318: INFO: Pod "downward-api-53f19855-8ba7-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:29:08.321: INFO: Trying to get logs from node hunter-worker pod downward-api-53f19855-8ba7-11ea-88a3-0242ac110017 container dapi-container:
STEP: delete the pod
May 1 12:29:08.372: INFO: Waiting for pod downward-api-53f19855-8ba7-11ea-88a3-0242ac110017 to disappear
May 1 12:29:08.407: INFO: Pod downward-api-53f19855-8ba7-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:29:08.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jgkww" for this suite.
May 1 12:29:14.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:29:14.772: INFO: namespace: e2e-tests-downward-api-jgkww, resource: bindings, ignored listing per whitelist
May 1 12:29:14.775: INFO: namespace e2e-tests-downward-api-jgkww deletion completed in 6.364630737s

• [SLOW TEST:17.926 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:29:14.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 1 12:29:15.496: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ed17068-8ba7-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-zqpsn" to be "success or failure"
May 1 12:29:15.518: INFO: Pod "downwardapi-volume-5ed17068-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.340543ms
May 1 12:29:17.522: INFO: Pod "downwardapi-volume-5ed17068-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026127505s
May 1 12:29:19.617: INFO: Pod "downwardapi-volume-5ed17068-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120715329s
May 1 12:29:21.621: INFO: Pod "downwardapi-volume-5ed17068-8ba7-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125418542s
STEP: Saw pod success
May 1 12:29:21.621: INFO: Pod "downwardapi-volume-5ed17068-8ba7-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:29:21.624: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5ed17068-8ba7-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May 1 12:29:21.726: INFO: Waiting for pod downwardapi-volume-5ed17068-8ba7-11ea-88a3-0242ac110017 to disappear
May 1 12:29:21.733: INFO: Pod downwardapi-volume-5ed17068-8ba7-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:29:21.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zqpsn" for this suite.
May 1 12:29:27.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:29:27.804: INFO: namespace: e2e-tests-projected-zqpsn, resource: bindings, ignored listing per whitelist
May 1 12:29:27.835: INFO: namespace e2e-tests-projected-zqpsn deletion completed in 6.097912213s

• [SLOW TEST:13.060 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:29:27.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-66423d77-8ba7-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume secrets
May 1 12:29:27.981: INFO: Waiting up to 5m0s for pod "pod-secrets-6643dcdb-8ba7-11ea-88a3-0242ac110017" in namespace "e2e-tests-secrets-6txn4" to be "success or failure"
May 1 12:29:28.004: INFO: Pod "pod-secrets-6643dcdb-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.535437ms
May 1 12:29:30.008: INFO: Pod "pod-secrets-6643dcdb-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027064961s
May 1 12:29:32.012: INFO: Pod "pod-secrets-6643dcdb-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031207427s
May 1 12:29:34.017: INFO: Pod "pod-secrets-6643dcdb-8ba7-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035988595s
STEP: Saw pod success
May 1 12:29:34.017: INFO: Pod "pod-secrets-6643dcdb-8ba7-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:29:34.020: INFO: Trying to get logs from node hunter-worker pod pod-secrets-6643dcdb-8ba7-11ea-88a3-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 1 12:29:34.253: INFO: Waiting for pod pod-secrets-6643dcdb-8ba7-11ea-88a3-0242ac110017 to disappear
May 1 12:29:34.264: INFO: Pod pod-secrets-6643dcdb-8ba7-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:29:34.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6txn4" for this suite.
May 1 12:29:42.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:29:42.359: INFO: namespace: e2e-tests-secrets-6txn4, resource: bindings, ignored listing per whitelist
May 1 12:29:42.375: INFO: namespace e2e-tests-secrets-6txn4 deletion completed in 8.10838351s

• [SLOW TEST:14.540 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:29:42.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:29:50.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-6k7pd" for this suite.
May 1 12:29:56.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:29:56.553: INFO: namespace: e2e-tests-namespaces-6k7pd, resource: bindings, ignored listing per whitelist
May 1 12:29:56.613: INFO: namespace e2e-tests-namespaces-6k7pd deletion completed in 6.177802031s
STEP: Destroying namespace "e2e-tests-nsdeletetest-z9p47" for this suite.
May 1 12:29:56.615: INFO: Namespace e2e-tests-nsdeletetest-z9p47 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-pzv54" for this suite.
May 1 12:30:02.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:30:02.730: INFO: namespace: e2e-tests-nsdeletetest-pzv54, resource: bindings, ignored listing per whitelist
May 1 12:30:02.734: INFO: namespace e2e-tests-nsdeletetest-pzv54 deletion completed in 6.119512446s

• [SLOW TEST:20.359 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:30:02.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-95jvn
May 1 12:30:06.939: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-95jvn
STEP: checking the pod's current state and verifying that restartCount is present
May 1 12:30:06.941: INFO: Initial restart count of pod liveness-http is 0
May 1 12:30:26.983: INFO: Restart count of pod e2e-tests-container-probe-95jvn/liveness-http is now 1 (20.041651156s elapsed)
May 1 12:30:47.366: INFO: Restart count of pod e2e-tests-container-probe-95jvn/liveness-http is now 2 (40.425121615s elapsed)
May 1 12:31:07.808: INFO: Restart count of pod e2e-tests-container-probe-95jvn/liveness-http is now 3 (1m0.866905584s elapsed)
May 1 12:31:26.075: INFO: Restart count of pod e2e-tests-container-probe-95jvn/liveness-http is now 4 (1m19.134129275s elapsed)
May 1 12:32:30.595: INFO: Restart count of pod e2e-tests-container-probe-95jvn/liveness-http is now 5 (2m23.653845718s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:32:30.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-95jvn" for this suite.
May 1 12:32:37.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:32:37.249: INFO: namespace: e2e-tests-container-probe-95jvn, resource: bindings, ignored listing per whitelist
May 1 12:32:37.291: INFO: namespace e2e-tests-container-probe-95jvn deletion completed in 6.483824368s

• [SLOW TEST:154.557 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:32:37.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:32:41.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-ss7zh" for this suite.
May 1 12:32:47.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:32:47.927: INFO: namespace: e2e-tests-emptydir-wrapper-ss7zh, resource: bindings, ignored listing per whitelist
May 1 12:32:47.940: INFO: namespace e2e-tests-emptydir-wrapper-ss7zh deletion completed in 6.212778663s

• [SLOW TEST:10.649 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:32:47.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 1 12:32:56.661: INFO: Successfully updated pod "annotationupdatedd868d1a-8ba7-11ea-88a3-0242ac110017"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:32:59.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ckjnb" for this suite.
May 1 12:33:17.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:33:17.211: INFO: namespace: e2e-tests-projected-ckjnb, resource: bindings, ignored listing per whitelist
May 1 12:33:17.250: INFO: namespace e2e-tests-projected-ckjnb deletion completed in 18.17193716s

• [SLOW TEST:29.310 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:33:17.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-ef212e14-8ba7-11ea-88a3-0242ac110017
STEP: Creating a pod to test consume configMaps
May 1 12:33:17.616: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ef23320b-8ba7-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-zvg5k" to be "success or failure"
May 1 12:33:17.629: INFO: Pod "pod-projected-configmaps-ef23320b-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 13.307065ms
May 1 12:33:19.633: INFO: Pod "pod-projected-configmaps-ef23320b-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01739703s
May 1 12:33:21.638: INFO: Pod "pod-projected-configmaps-ef23320b-8ba7-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02197083s
STEP: Saw pod success
May 1 12:33:21.638: INFO: Pod "pod-projected-configmaps-ef23320b-8ba7-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:33:21.641: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-ef23320b-8ba7-11ea-88a3-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May 1 12:33:21.754: INFO: Waiting for pod pod-projected-configmaps-ef23320b-8ba7-11ea-88a3-0242ac110017 to disappear
May 1 12:33:21.890: INFO: Pod pod-projected-configmaps-ef23320b-8ba7-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:33:21.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zvg5k" for this suite.
May 1 12:33:27.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:33:27.943: INFO: namespace: e2e-tests-projected-zvg5k, resource: bindings, ignored listing per whitelist
May 1 12:33:27.990: INFO: namespace e2e-tests-projected-zvg5k deletion completed in 6.096358669s

• [SLOW TEST:10.739 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:33:27.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
May 1 12:33:28.125: INFO: Waiting up to 5m0s for pod "var-expansion-f5647d56-8ba7-11ea-88a3-0242ac110017" in namespace "e2e-tests-var-expansion-8jv96" to be "success or failure"
May 1 12:33:28.226: INFO: Pod "var-expansion-f5647d56-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 100.594621ms
May 1 12:33:30.230: INFO: Pod "var-expansion-f5647d56-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10474274s
May 1 12:33:32.234: INFO: Pod "var-expansion-f5647d56-8ba7-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10923854s
STEP: Saw pod success
May 1 12:33:32.234: INFO: Pod "var-expansion-f5647d56-8ba7-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:33:32.237: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-f5647d56-8ba7-11ea-88a3-0242ac110017 container dapi-container:
STEP: delete the pod
May 1 12:33:32.479: INFO: Waiting for pod var-expansion-f5647d56-8ba7-11ea-88a3-0242ac110017 to disappear
May 1 12:33:32.495: INFO: Pod var-expansion-f5647d56-8ba7-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:33:32.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-8jv96" for this suite.
May 1 12:33:38.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:33:38.559: INFO: namespace: e2e-tests-var-expansion-8jv96, resource: bindings, ignored listing per whitelist May 1 12:33:38.631: INFO: namespace e2e-tests-var-expansion-8jv96 deletion completed in 6.132115867s • [SLOW TEST:10.641 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:33:38.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 1 12:33:38.742: INFO: Waiting up to 5m0s for pod "downward-api-fbb81e87-8ba7-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-vfjcw" to be "success or failure" May 1 12:33:38.746: INFO: Pod "downward-api-fbb81e87-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.437408ms May 1 12:33:40.751: INFO: Pod "downward-api-fbb81e87-8ba7-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008185806s May 1 12:33:42.755: INFO: Pod "downward-api-fbb81e87-8ba7-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012850067s STEP: Saw pod success May 1 12:33:42.755: INFO: Pod "downward-api-fbb81e87-8ba7-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:33:42.758: INFO: Trying to get logs from node hunter-worker pod downward-api-fbb81e87-8ba7-11ea-88a3-0242ac110017 container dapi-container: STEP: delete the pod May 1 12:33:42.777: INFO: Waiting for pod downward-api-fbb81e87-8ba7-11ea-88a3-0242ac110017 to disappear May 1 12:33:42.782: INFO: Pod downward-api-fbb81e87-8ba7-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:33:42.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vfjcw" for this suite. 
May 1 12:33:48.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:33:48.963: INFO: namespace: e2e-tests-downward-api-vfjcw, resource: bindings, ignored listing per whitelist May 1 12:33:49.002: INFO: namespace e2e-tests-downward-api-vfjcw deletion completed in 6.217755285s • [SLOW TEST:10.370 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:33:49.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 1 12:33:49.113: INFO: namespace e2e-tests-kubectl-7htf4 May 1 12:33:49.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7htf4' May 1 12:33:49.409: INFO: stderr: "" May 1 12:33:49.409: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
May 1 12:33:50.415: INFO: Selector matched 1 pods for map[app:redis] May 1 12:33:50.415: INFO: Found 0 / 1 May 1 12:33:51.414: INFO: Selector matched 1 pods for map[app:redis] May 1 12:33:51.414: INFO: Found 0 / 1 May 1 12:33:52.783: INFO: Selector matched 1 pods for map[app:redis] May 1 12:33:52.783: INFO: Found 0 / 1 May 1 12:33:53.414: INFO: Selector matched 1 pods for map[app:redis] May 1 12:33:53.414: INFO: Found 0 / 1 May 1 12:33:54.565: INFO: Selector matched 1 pods for map[app:redis] May 1 12:33:54.565: INFO: Found 0 / 1 May 1 12:33:55.656: INFO: Selector matched 1 pods for map[app:redis] May 1 12:33:55.656: INFO: Found 0 / 1 May 1 12:33:56.413: INFO: Selector matched 1 pods for map[app:redis] May 1 12:33:56.413: INFO: Found 1 / 1 May 1 12:33:56.413: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 1 12:33:56.415: INFO: Selector matched 1 pods for map[app:redis] May 1 12:33:56.415: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 1 12:33:56.415: INFO: wait on redis-master startup in e2e-tests-kubectl-7htf4 May 1 12:33:56.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d22nz redis-master --namespace=e2e-tests-kubectl-7htf4' May 1 12:33:56.553: INFO: stderr: "" May 1 12:33:56.553: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 May 12:33:54.706 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 May 12:33:54.706 # Server started, Redis version 3.2.12\n1:M 01 May 12:33:54.706 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 May 12:33:54.706 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 1 12:33:56.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-7htf4' May 1 12:33:56.722: INFO: stderr: "" May 1 12:33:56.722: INFO: stdout: "service/rm2 exposed\n" May 1 12:33:56.758: INFO: Service rm2 in namespace e2e-tests-kubectl-7htf4 found. STEP: exposing service May 1 12:33:58.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-7htf4' May 1 12:33:58.966: INFO: stderr: "" May 1 12:33:58.966: INFO: stdout: "service/rm3 exposed\n" May 1 12:33:58.994: INFO: Service rm3 in namespace e2e-tests-kubectl-7htf4 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:34:01.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7htf4" for this suite. May 1 12:34:25.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:34:25.286: INFO: namespace: e2e-tests-kubectl-7htf4, resource: bindings, ignored listing per whitelist May 1 12:34:25.302: INFO: namespace e2e-tests-kubectl-7htf4 deletion completed in 24.298478488s • [SLOW TEST:36.300 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:34:25.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:34:32.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-rjlxj" for this suite. May 1 12:34:54.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:34:54.662: INFO: namespace: e2e-tests-replication-controller-rjlxj, resource: bindings, ignored listing per whitelist May 1 12:34:54.760: INFO: namespace e2e-tests-replication-controller-rjlxj deletion completed in 22.15329788s • [SLOW TEST:29.458 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:34:54.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 12:34:54.879: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 1 12:34:54.884: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4w27c/daemonsets","resourceVersion":"8168153"},"items":null} May 1 12:34:54.886: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4w27c/pods","resourceVersion":"8168153"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:34:54.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-4w27c" for this suite. May 1 12:35:00.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:35:00.953: INFO: namespace: e2e-tests-daemonsets-4w27c, resource: bindings, ignored listing per whitelist May 1 12:35:00.982: INFO: namespace e2e-tests-daemonsets-4w27c deletion completed in 6.088007024s S [SKIPPING] [6.222 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 12:34:54.879: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:35:00.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 1 12:35:01.066: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 1 12:35:01.082: INFO: Waiting for terminating namespaces to be deleted... May 1 12:35:01.084: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 1 12:35:01.090: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 1 12:35:01.090: INFO: Container kube-proxy ready: true, restart count 0 May 1 12:35:01.090: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 1 12:35:01.090: INFO: Container kindnet-cni ready: true, restart count 0 May 1 12:35:01.090: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 1 12:35:01.090: INFO: Container coredns ready: true, restart count 0 May 1 12:35:01.090: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 1 12:35:01.097: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 1 12:35:01.097: INFO: Container kindnet-cni ready: true, restart count 0 May 1 12:35:01.097: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 1 12:35:01.097: INFO: Container coredns ready: true, restart count 0 May 1 12:35:01.097: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container 
statuses recorded) May 1 12:35:01.097: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160ae7a1e60868a5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:35:02.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-5sfdp" for this suite. May 1 12:35:10.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:35:10.193: INFO: namespace: e2e-tests-sched-pred-5sfdp, resource: bindings, ignored listing per whitelist May 1 12:35:10.207: INFO: namespace e2e-tests-sched-pred-5sfdp deletion completed in 8.087717246s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:9.225 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:35:10.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8pr4q STEP: creating a selector STEP: Creating the service pods in kubernetes May 1 12:35:10.751: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 1 12:35:36.942: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.168 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8pr4q PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 12:35:36.942: INFO: >>> kubeConfig: /root/.kube/config I0501 12:35:36.977448 6 log.go:172] (0xc0029922c0) (0xc0011edc20) Create stream I0501 12:35:36.977478 6 log.go:172] (0xc0029922c0) (0xc0011edc20) Stream added, broadcasting: 1 I0501 12:35:36.984963 6 log.go:172] (0xc0029922c0) Reply frame received for 1 I0501 12:35:36.985012 6 log.go:172] (0xc0029922c0) (0xc0022b0000) Create stream I0501 12:35:36.985024 6 log.go:172] (0xc0029922c0) (0xc0022b0000) Stream added, broadcasting: 3 I0501 12:35:36.986188 6 log.go:172] (0xc0029922c0) Reply frame received for 3 I0501 12:35:36.986225 6 log.go:172] (0xc0029922c0) (0xc0024b4000) Create stream I0501 12:35:36.986238 6 log.go:172] (0xc0029922c0) (0xc0024b4000) Stream added, broadcasting: 5 I0501 12:35:36.987226 6 log.go:172] (0xc0029922c0) Reply frame received for 5 I0501 12:35:38.080853 6 log.go:172] (0xc0029922c0) Data frame received for 3 I0501 
12:35:38.080895 6 log.go:172] (0xc0022b0000) (3) Data frame handling I0501 12:35:38.080912 6 log.go:172] (0xc0022b0000) (3) Data frame sent I0501 12:35:38.080926 6 log.go:172] (0xc0029922c0) Data frame received for 3 I0501 12:35:38.080938 6 log.go:172] (0xc0022b0000) (3) Data frame handling I0501 12:35:38.081377 6 log.go:172] (0xc0029922c0) Data frame received for 5 I0501 12:35:38.081454 6 log.go:172] (0xc0024b4000) (5) Data frame handling I0501 12:35:38.083639 6 log.go:172] (0xc0029922c0) Data frame received for 1 I0501 12:35:38.083675 6 log.go:172] (0xc0011edc20) (1) Data frame handling I0501 12:35:38.083703 6 log.go:172] (0xc0011edc20) (1) Data frame sent I0501 12:35:38.083729 6 log.go:172] (0xc0029922c0) (0xc0011edc20) Stream removed, broadcasting: 1 I0501 12:35:38.083765 6 log.go:172] (0xc0029922c0) Go away received I0501 12:35:38.083867 6 log.go:172] (0xc0029922c0) (0xc0011edc20) Stream removed, broadcasting: 1 I0501 12:35:38.083899 6 log.go:172] (0xc0029922c0) (0xc0022b0000) Stream removed, broadcasting: 3 I0501 12:35:38.083916 6 log.go:172] (0xc0029922c0) (0xc0024b4000) Stream removed, broadcasting: 5 May 1 12:35:38.083: INFO: Found all expected endpoints: [netserver-0] May 1 12:35:38.088: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.202 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8pr4q PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 12:35:38.088: INFO: >>> kubeConfig: /root/.kube/config I0501 12:35:38.114371 6 log.go:172] (0xc000ab0580) (0xc0022b0320) Create stream I0501 12:35:38.114413 6 log.go:172] (0xc000ab0580) (0xc0022b0320) Stream added, broadcasting: 1 I0501 12:35:38.116105 6 log.go:172] (0xc000ab0580) Reply frame received for 1 I0501 12:35:38.116152 6 log.go:172] (0xc000ab0580) (0xc0022b03c0) Create stream I0501 12:35:38.116166 6 log.go:172] (0xc000ab0580) (0xc0022b03c0) Stream added, broadcasting: 3 I0501 
12:35:38.117235 6 log.go:172] (0xc000ab0580) Reply frame received for 3 I0501 12:35:38.117287 6 log.go:172] (0xc000ab0580) (0xc0024b40a0) Create stream I0501 12:35:38.117300 6 log.go:172] (0xc000ab0580) (0xc0024b40a0) Stream added, broadcasting: 5 I0501 12:35:38.118293 6 log.go:172] (0xc000ab0580) Reply frame received for 5 I0501 12:35:39.203638 6 log.go:172] (0xc000ab0580) Data frame received for 3 I0501 12:35:39.203717 6 log.go:172] (0xc0022b03c0) (3) Data frame handling I0501 12:35:39.203767 6 log.go:172] (0xc0022b03c0) (3) Data frame sent I0501 12:35:39.203827 6 log.go:172] (0xc000ab0580) Data frame received for 3 I0501 12:35:39.203854 6 log.go:172] (0xc0022b03c0) (3) Data frame handling I0501 12:35:39.204020 6 log.go:172] (0xc000ab0580) Data frame received for 5 I0501 12:35:39.204085 6 log.go:172] (0xc0024b40a0) (5) Data frame handling I0501 12:35:39.206143 6 log.go:172] (0xc000ab0580) Data frame received for 1 I0501 12:35:39.206162 6 log.go:172] (0xc0022b0320) (1) Data frame handling I0501 12:35:39.206171 6 log.go:172] (0xc0022b0320) (1) Data frame sent I0501 12:35:39.206193 6 log.go:172] (0xc000ab0580) (0xc0022b0320) Stream removed, broadcasting: 1 I0501 12:35:39.206209 6 log.go:172] (0xc000ab0580) Go away received I0501 12:35:39.206365 6 log.go:172] (0xc000ab0580) (0xc0022b0320) Stream removed, broadcasting: 1 I0501 12:35:39.206388 6 log.go:172] (0xc000ab0580) (0xc0022b03c0) Stream removed, broadcasting: 3 I0501 12:35:39.206402 6 log.go:172] (0xc000ab0580) (0xc0024b40a0) Stream removed, broadcasting: 5 May 1 12:35:39.206: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:35:39.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-8pr4q" for this suite. 
May 1 12:36:03.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:36:03.426: INFO: namespace: e2e-tests-pod-network-test-8pr4q, resource: bindings, ignored listing per whitelist May 1 12:36:03.493: INFO: namespace e2e-tests-pod-network-test-8pr4q deletion completed in 24.145878531s • [SLOW TEST:53.286 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:36:03.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 12:36:03.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-521255d5-8ba8-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-zqx9h" to be "success or failure" May 1 
12:36:03.618: INFO: Pod "downwardapi-volume-521255d5-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.533618ms May 1 12:36:05.635: INFO: Pod "downwardapi-volume-521255d5-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026288863s May 1 12:36:07.638: INFO: Pod "downwardapi-volume-521255d5-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03000092s May 1 12:36:09.844: INFO: Pod "downwardapi-volume-521255d5-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.236108383s May 1 12:36:11.848: INFO: Pod "downwardapi-volume-521255d5-8ba8-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.240067908s STEP: Saw pod success May 1 12:36:11.848: INFO: Pod "downwardapi-volume-521255d5-8ba8-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:36:11.851: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-521255d5-8ba8-11ea-88a3-0242ac110017 container client-container: STEP: delete the pod May 1 12:36:11.909: INFO: Waiting for pod downwardapi-volume-521255d5-8ba8-11ea-88a3-0242ac110017 to disappear May 1 12:36:11.918: INFO: Pod downwardapi-volume-521255d5-8ba8-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:36:11.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zqx9h" for this suite. 
May 1 12:36:18.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:36:18.246: INFO: namespace: e2e-tests-projected-zqx9h, resource: bindings, ignored listing per whitelist May 1 12:36:18.283: INFO: namespace e2e-tests-projected-zqx9h deletion completed in 6.361487483s • [SLOW TEST:14.789 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:36:18.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 1 12:36:18.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 1 12:36:18.761: INFO: stderr: "" May 1 12:36:18.761: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:36:18.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-js7cn" for this suite. May 1 12:36:24.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:36:24.900: INFO: namespace: e2e-tests-kubectl-js7cn, resource: bindings, ignored listing per whitelist May 1 12:36:24.902: INFO: namespace e2e-tests-kubectl-js7cn deletion completed in 6.13783884s • [SLOW TEST:6.619 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:36:24.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-5ed1df38-8ba8-11ea-88a3-0242ac110017 STEP: Creating configMap with name cm-test-opt-upd-5ed1dfd3-8ba8-11ea-88a3-0242ac110017 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5ed1df38-8ba8-11ea-88a3-0242ac110017 STEP: Updating configmap cm-test-opt-upd-5ed1dfd3-8ba8-11ea-88a3-0242ac110017 STEP: Creating configMap with name cm-test-opt-create-5ed1e011-8ba8-11ea-88a3-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:36:35.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-l6cgl" for this suite. May 1 12:36:59.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:36:59.253: INFO: namespace: e2e-tests-configmap-l6cgl, resource: bindings, ignored listing per whitelist May 1 12:36:59.268: INFO: namespace e2e-tests-configmap-l6cgl deletion completed in 24.095439413s • [SLOW TEST:34.366 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:36:59.269: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 1 12:37:03.984: INFO: Successfully updated pod "labelsupdate735a8ada-8ba8-11ea-88a3-0242ac110017" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:37:08.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cppr9" for this suite. May 1 12:37:30.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:37:30.229: INFO: namespace: e2e-tests-downward-api-cppr9, resource: bindings, ignored listing per whitelist May 1 12:37:30.242: INFO: namespace e2e-tests-downward-api-cppr9 deletion completed in 22.085882182s • [SLOW TEST:30.974 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:37:30.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0501 12:37:32.134774 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 12:37:32.134: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For 
errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:37:32.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-p28cr" for this suite. May 1 12:37:40.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:37:40.235: INFO: namespace: e2e-tests-gc-p28cr, resource: bindings, ignored listing per whitelist May 1 12:37:40.256: INFO: namespace e2e-tests-gc-p28cr deletion completed in 8.1193699s • [SLOW TEST:10.014 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:37:40.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 1 12:37:40.352: INFO: Waiting up to 5m0s for pod "pod-8bbc7db9-8ba8-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-qvj96" to be "success or failure" May 1 
12:37:40.379: INFO: Pod "pod-8bbc7db9-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 26.692966ms
May 1 12:37:42.382: INFO: Pod "pod-8bbc7db9-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029665088s
May 1 12:37:44.387: INFO: Pod "pod-8bbc7db9-8ba8-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.034176703s
May 1 12:37:46.391: INFO: Pod "pod-8bbc7db9-8ba8-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038216568s
STEP: Saw pod success
May 1 12:37:46.391: INFO: Pod "pod-8bbc7db9-8ba8-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:37:46.393: INFO: Trying to get logs from node hunter-worker pod pod-8bbc7db9-8ba8-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 12:37:46.508: INFO: Waiting for pod pod-8bbc7db9-8ba8-11ea-88a3-0242ac110017 to disappear
May 1 12:37:46.512: INFO: Pod pod-8bbc7db9-8ba8-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:37:46.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qvj96" for this suite.
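The "Waiting up to 5m0s for pod … to be 'success or failure'" entries above come from a poll loop that re-checks the pod phase on an interval until a terminal phase is reached or the timeout expires, logging the elapsed time at each check. A minimal standalone sketch of that pattern (the `get_phase` callable is a hypothetical stand-in, not the e2e framework's actual client code):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal pod phase or until timeout expires.

    Returns (phase, elapsed_seconds); raises TimeoutError otherwise.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        if phase in ("Succeeded", "Failed"):  # terminal pod phases
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        time.sleep(interval)

# Simulated phase sequence mirroring the log entries above.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
phase, elapsed = wait_for_terminal_phase(lambda: next(phases), interval=0.0)
print(phase)  # Succeeded
```

The real framework polls the API server for the pod object on each iteration; the sketch only reproduces the timing and termination logic.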
May 1 12:37:54.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:37:54.551: INFO: namespace: e2e-tests-emptydir-qvj96, resource: bindings, ignored listing per whitelist May 1 12:37:54.623: INFO: namespace e2e-tests-emptydir-qvj96 deletion completed in 8.108260098s • [SLOW TEST:14.367 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:37:54.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-94540a31-8ba8-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 12:37:54.766: INFO: Waiting up to 5m0s for pod "pod-configmaps-94549cb4-8ba8-11ea-88a3-0242ac110017" in namespace "e2e-tests-configmap-fk7ft" to be "success or failure" May 1 12:37:54.784: INFO: Pod "pod-configmaps-94549cb4-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.115202ms May 1 12:37:56.798: INFO: Pod "pod-configmaps-94549cb4-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031532862s May 1 12:37:58.802: INFO: Pod "pod-configmaps-94549cb4-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035439951s May 1 12:38:00.816: INFO: Pod "pod-configmaps-94549cb4-8ba8-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049410482s STEP: Saw pod success May 1 12:38:00.816: INFO: Pod "pod-configmaps-94549cb4-8ba8-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:38:00.818: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-94549cb4-8ba8-11ea-88a3-0242ac110017 container configmap-volume-test: STEP: delete the pod May 1 12:38:00.846: INFO: Waiting for pod pod-configmaps-94549cb4-8ba8-11ea-88a3-0242ac110017 to disappear May 1 12:38:00.848: INFO: Pod pod-configmaps-94549cb4-8ba8-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:38:00.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fk7ft" for this suite. 
May 1 12:38:06.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:38:06.910: INFO: namespace: e2e-tests-configmap-fk7ft, resource: bindings, ignored listing per whitelist May 1 12:38:06.986: INFO: namespace e2e-tests-configmap-fk7ft deletion completed in 6.134920371s • [SLOW TEST:12.362 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:38:06.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9bacc176-8ba8-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 12:38:07.120: INFO: Waiting up to 5m0s for pod "pod-configmaps-9bb26941-8ba8-11ea-88a3-0242ac110017" in namespace "e2e-tests-configmap-pv7lk" to be "success or failure" May 1 12:38:07.124: INFO: Pod "pod-configmaps-9bb26941-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.491643ms
May 1 12:38:09.167: INFO: Pod "pod-configmaps-9bb26941-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047057944s
May 1 12:38:11.171: INFO: Pod "pod-configmaps-9bb26941-8ba8-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.051412649s
May 1 12:38:13.176: INFO: Pod "pod-configmaps-9bb26941-8ba8-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055512742s
STEP: Saw pod success
May 1 12:38:13.176: INFO: Pod "pod-configmaps-9bb26941-8ba8-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:38:13.179: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-9bb26941-8ba8-11ea-88a3-0242ac110017 container configmap-volume-test:
STEP: delete the pod
May 1 12:38:13.203: INFO: Waiting for pod pod-configmaps-9bb26941-8ba8-11ea-88a3-0242ac110017 to disappear
May 1 12:38:13.252: INFO: Pod pod-configmaps-9bb26941-8ba8-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:38:13.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pv7lk" for this suite.
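The elapsed times and deletion durations throughout this run ("18.115202ms", "6.097178296s", "3m0s") are Go `time.Duration` strings. When post-processing a log like this one, they can be converted to seconds with a small parser; a sketch, assuming only the units that actually appear in such logs:

```python
import re

# Multipliers for the Go time.Duration units seen in this log.
_UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def go_duration_to_seconds(text):
    """Convert a Go-style duration string such as '2m3.5s' or '18.115202ms' to seconds."""
    total, pos = 0.0, 0
    for m in re.finditer(r"(\d+(?:\.\d+)?)(ns|us|ms|s|m|h)", text):
        total += float(m.group(1)) * _UNITS[m.group(2)]
        pos = m.end()
    if pos != len(text):
        raise ValueError(f"unparsed duration: {text!r}")
    return total

print(go_duration_to_seconds("18.115202ms"))
print(go_duration_to_seconds("2m3.5s"))
```

Go's own `time.ParseDuration` also accepts the `µs` spelling and negative durations; this sketch deliberately covers only the forms present in the log above.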
May 1 12:38:19.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:38:19.302: INFO: namespace: e2e-tests-configmap-pv7lk, resource: bindings, ignored listing per whitelist May 1 12:38:19.353: INFO: namespace e2e-tests-configmap-pv7lk deletion completed in 6.097178296s • [SLOW TEST:12.367 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:38:19.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 1 12:38:19.587: INFO: Waiting up to 5m0s for pod "pod-a31ee66e-8ba8-11ea-88a3-0242ac110017" in namespace "e2e-tests-emptydir-tw978" to be "success or failure" May 1 12:38:19.591: INFO: Pod "pod-a31ee66e-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.436939ms
May 1 12:38:21.594: INFO: Pod "pod-a31ee66e-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007011349s
May 1 12:38:23.599: INFO: Pod "pod-a31ee66e-8ba8-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011377576s
STEP: Saw pod success
May 1 12:38:23.599: INFO: Pod "pod-a31ee66e-8ba8-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:38:23.602: INFO: Trying to get logs from node hunter-worker2 pod pod-a31ee66e-8ba8-11ea-88a3-0242ac110017 container test-container:
STEP: delete the pod
May 1 12:38:23.680: INFO: Waiting for pod pod-a31ee66e-8ba8-11ea-88a3-0242ac110017 to disappear
May 1 12:38:23.687: INFO: Pod pod-a31ee66e-8ba8-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:38:23.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tw978" for this suite.
May 1 12:38:29.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:38:29.811: INFO: namespace: e2e-tests-emptydir-tw978, resource: bindings, ignored listing per whitelist May 1 12:38:29.844: INFO: namespace e2e-tests-emptydir-tw978 deletion completed in 6.153855148s • [SLOW TEST:10.491 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:38:29.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 12:38:29.974: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a94cbd20-8ba8-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-jvcwl" to be "success or failure" May 1 12:38:29.980: INFO: Pod 
"downwardapi-volume-a94cbd20-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.8586ms
May 1 12:38:32.140: INFO: Pod "downwardapi-volume-a94cbd20-8ba8-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16579858s
May 1 12:38:34.146: INFO: Pod "downwardapi-volume-a94cbd20-8ba8-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171511369s
STEP: Saw pod success
May 1 12:38:34.146: INFO: Pod "downwardapi-volume-a94cbd20-8ba8-11ea-88a3-0242ac110017" satisfied condition "success or failure"
May 1 12:38:34.149: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a94cbd20-8ba8-11ea-88a3-0242ac110017 container client-container:
STEP: delete the pod
May 1 12:38:34.239: INFO: Waiting for pod downwardapi-volume-a94cbd20-8ba8-11ea-88a3-0242ac110017 to disappear
May 1 12:38:34.337: INFO: Pod downwardapi-volume-a94cbd20-8ba8-11ea-88a3-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:38:34.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jvcwl" for this suite.
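The `kubectl cluster-info` stdout captured earlier in this run embeds ANSI color escapes (`\x1b[0;32m…\x1b[0m`), which show up literally when the output is logged rather than rendered in a terminal. When post-processing such captures, the codes can be stripped with a small regex; a sketch, covering only the SGR (color) escapes that kubectl emits:

```python
import re

# SGR escape sequences like \x1b[0;32m that kubectl uses for colored output.
ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text):
    """Remove ANSI color codes, e.g. from captured `kubectl cluster-info` stdout."""
    return ANSI_SGR.sub("", text)

stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n")
clean = strip_ansi(stdout)
print(clean)
```

Cursor-movement and other non-SGR escapes would need a broader pattern; for log captures like this one, color codes are the only kind present.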
May 1 12:38:40.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:38:40.478: INFO: namespace: e2e-tests-projected-jvcwl, resource: bindings, ignored listing per whitelist May 1 12:38:40.504: INFO: namespace e2e-tests-projected-jvcwl deletion completed in 6.161397246s • [SLOW TEST:10.659 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:38:40.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-jw8g STEP: Creating a pod to test atomic-volume-subpath May 1 12:38:40.659: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jw8g" in namespace "e2e-tests-subpath-spbqd" to be "success or failure" May 1 12:38:40.684: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.489832ms
May 1 12:38:42.688: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028724078s
May 1 12:38:44.793: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134086816s
May 1 12:38:46.797: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138024767s
May 1 12:38:48.800: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Running", Reason="", readiness=false. Elapsed: 8.140723947s
May 1 12:38:50.804: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Running", Reason="", readiness=false. Elapsed: 10.145400621s
May 1 12:38:52.809: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Running", Reason="", readiness=false. Elapsed: 12.150283927s
May 1 12:38:54.812: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Running", Reason="", readiness=false. Elapsed: 14.152872557s
May 1 12:38:56.817: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Running", Reason="", readiness=false. Elapsed: 16.157595685s
May 1 12:38:58.820: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Running", Reason="", readiness=false. Elapsed: 18.161403708s
May 1 12:39:00.825: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Running", Reason="", readiness=false. Elapsed: 20.165858394s
May 1 12:39:02.829: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Running", Reason="", readiness=false. Elapsed: 22.170095083s
May 1 12:39:04.832: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Running", Reason="", readiness=false. Elapsed: 24.173097489s
May 1 12:39:06.836: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Running", Reason="", readiness=false. Elapsed: 26.176662222s
May 1 12:39:08.840: INFO: Pod "pod-subpath-test-secret-jw8g": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 28.180811474s STEP: Saw pod success May 1 12:39:08.840: INFO: Pod "pod-subpath-test-secret-jw8g" satisfied condition "success or failure" May 1 12:39:08.843: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-jw8g container test-container-subpath-secret-jw8g: STEP: delete the pod May 1 12:39:08.908: INFO: Waiting for pod pod-subpath-test-secret-jw8g to disappear May 1 12:39:08.937: INFO: Pod pod-subpath-test-secret-jw8g no longer exists STEP: Deleting pod pod-subpath-test-secret-jw8g May 1 12:39:08.937: INFO: Deleting pod "pod-subpath-test-secret-jw8g" in namespace "e2e-tests-subpath-spbqd" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:39:08.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-spbqd" for this suite. May 1 12:39:14.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:39:14.979: INFO: namespace: e2e-tests-subpath-spbqd, resource: bindings, ignored listing per whitelist May 1 12:39:15.035: INFO: namespace e2e-tests-subpath-spbqd deletion completed in 6.090528633s • [SLOW TEST:34.531 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:39:15.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 1 12:39:15.159: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.032218ms) May 1 12:39:15.161: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.301817ms) May 1 12:39:15.164: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.132181ms) May 1 12:39:15.166: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.479033ms) May 1 12:39:15.169: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.85987ms) May 1 12:39:15.172: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.685673ms) May 1 12:39:15.174: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.797703ms) May 1 12:39:15.177: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.591335ms) May 1 12:39:15.180: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.898742ms) May 1 12:39:15.183: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.237116ms) May 1 12:39:15.186: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.162991ms) May 1 12:39:15.191: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.365031ms) May 1 12:39:15.201: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 10.317401ms) May 1 12:39:15.206: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.474382ms) May 1 12:39:15.208: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.013362ms) May 1 12:39:15.210: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.293691ms) May 1 12:39:15.212: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.715054ms) May 1 12:39:15.214: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.060041ms) May 1 12:39:15.216: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.786703ms) May 1 12:39:15.218: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.181856ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:39:15.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-w9zcq" for this suite.
May 1 12:39:21.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:39:21.342: INFO: namespace: e2e-tests-proxy-w9zcq, resource: bindings, ignored listing per whitelist
May 1 12:39:21.354: INFO: namespace e2e-tests-proxy-w9zcq deletion completed in 6.133646141s

• [SLOW TEST:6.318 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:39:21.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-qlrcl
I0501 12:39:21.478884 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-qlrcl, replica count: 1
I0501 12:39:22.529504 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0501 12:39:23.529792 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0501 12:39:24.529953 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0501 12:39:25.530163 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 1 12:39:25.823: INFO: Created: latency-svc-w2kzn
May 1 12:39:25.844: INFO: Got endpoints: latency-svc-w2kzn [214.299048ms]
May 1 12:39:25.882: INFO: Created: latency-svc-685z6
May 1 12:39:25.924: INFO: Got endpoints: latency-svc-685z6 [79.964783ms]
May 1 12:39:25.954: INFO: Created: latency-svc-xgpv6
May 1 12:39:25.995: INFO: Got endpoints: latency-svc-xgpv6 [150.889731ms]
May 1 12:39:26.212: INFO: Created: latency-svc-xnffl
May 1 12:39:26.457: INFO: Got endpoints: latency-svc-xnffl [612.73786ms]
May 1 12:39:26.488: INFO: Created: latency-svc-rx8dw
May 1 12:39:26.679: INFO: Got endpoints: latency-svc-rx8dw [834.958621ms]
May 1 12:39:26.722: INFO: Created: latency-svc-ljrkl
May 1 12:39:26.762: INFO: Got endpoints: latency-svc-ljrkl [917.39595ms]
May 1 12:39:26.932: INFO: Created: latency-svc-nr45q
May 1 12:39:26.934: INFO: Got endpoints: latency-svc-nr45q [1.089971321s]
May 1 12:39:26.993: INFO: Created: latency-svc-xqxc4
May 1 12:39:27.008: INFO: Got endpoints: latency-svc-xqxc4 [1.163172979s]
May 1 12:39:27.087: INFO: Created: latency-svc-2ztd4
May 1 12:39:27.146: INFO: Got endpoints: latency-svc-2ztd4 [1.301849817s]
May 1 12:39:27.535: INFO: Created: latency-svc-d9fzk
May 1 12:39:27.752: INFO: Got endpoints: latency-svc-d9fzk [1.907189964s]
May 1 12:39:27.949: INFO: Created: latency-svc-vmv4w
May 1 12:39:27.954: INFO: Got endpoints: latency-svc-vmv4w [2.109919728s]
May 1 12:39:28.039: INFO: Created: latency-svc-twkg9
May 1 12:39:28.148: INFO: Got endpoints: latency-svc-twkg9 [2.303807742s]
May 1 12:39:28.189: INFO: Created: latency-svc-gtr82
May 1 12:39:28.225: INFO: Got endpoints: latency-svc-gtr82 [2.380698802s]
May 1 12:39:28.525: INFO: Created: latency-svc-lklhg
May 1 12:39:28.578: INFO: Got endpoints: latency-svc-lklhg [2.734135463s]
May 1 12:39:28.698: INFO: Created: latency-svc-vpj7f
May 1 12:39:28.735: INFO: Got endpoints: latency-svc-vpj7f [2.890702409s]
May 1 12:39:28.788: INFO: Created: latency-svc-8rhgx
May 1 12:39:28.843: INFO: Got endpoints: latency-svc-8rhgx [2.998238237s]
May 1 12:39:28.910: INFO: Created: latency-svc-klrqg
May 1 12:39:28.997: INFO: Got endpoints: latency-svc-klrqg [3.072552707s]
May 1 12:39:29.000: INFO: Created: latency-svc-tv96q
May 1 12:39:29.016: INFO: Got endpoints: latency-svc-tv96q [3.020924347s]
May 1 12:39:29.047: INFO: Created: latency-svc-7kf7n
May 1 12:39:29.066: INFO: Got endpoints: latency-svc-7kf7n [2.608298939s]
May 1 12:39:29.146: INFO: Created: latency-svc-l7489
May 1 12:39:29.150: INFO: Got endpoints: latency-svc-l7489 [2.470225405s]
May 1 12:39:29.199: INFO: Created: latency-svc-s6h87
May 1 12:39:29.235: INFO: Got endpoints: latency-svc-s6h87 [2.47349371s]
May 1 12:39:29.296: INFO: Created: latency-svc-7mclv
May 1 12:39:29.307: INFO: Got endpoints: latency-svc-7mclv [2.372427056s]
May 1 12:39:29.336: INFO: Created: latency-svc-gwftt
May 1 12:39:29.355: INFO: Got endpoints: latency-svc-gwftt [2.347579502s]
May 1 12:39:29.378: INFO: Created: latency-svc-fkrcm
May 1 12:39:29.391: INFO: Got endpoints: latency-svc-fkrcm [2.244987929s]
May 1 12:39:29.446: INFO: Created: latency-svc-wmd42
May 1 12:39:29.452: INFO: Got endpoints: latency-svc-wmd42 [1.700498351s]
May 1 12:39:29.479: INFO: Created: latency-svc-tqc8z
May 1 12:39:29.494: INFO: Got endpoints: latency-svc-tqc8z [1.539601248s]
May 1 12:39:29.522: INFO: Created: latency-svc-5x76w
May 1 12:39:29.536: INFO: Got endpoints: latency-svc-5x76w [1.387735922s]
May 1 12:39:29.596: INFO: Created: latency-svc-2jntc
May 1 12:39:29.604: INFO: Got endpoints: latency-svc-2jntc [1.379048943s]
May 1 12:39:29.630: INFO: Created: latency-svc-l4h2d
May 1 12:39:29.644: INFO: Got endpoints: latency-svc-l4h2d [1.065914286s]
May 1 12:39:29.666: INFO: Created: latency-svc-9jvtw
May 1 12:39:29.675: INFO: Got endpoints: latency-svc-9jvtw [939.23522ms]
May 1 12:39:29.763: INFO: Created: latency-svc-5fqgj
May 1 12:39:29.766: INFO: Got endpoints: latency-svc-5fqgj [922.778973ms]
May 1 12:39:29.925: INFO: Created: latency-svc-4gstb
May 1 12:39:29.930: INFO: Got endpoints: latency-svc-4gstb [932.779232ms]
May 1 12:39:30.080: INFO: Created: latency-svc-cvbgl
May 1 12:39:30.089: INFO: Got endpoints: latency-svc-cvbgl [1.072758059s]
May 1 12:39:30.108: INFO: Created: latency-svc-kmqhm
May 1 12:39:30.138: INFO: Got endpoints: latency-svc-kmqhm [1.072537889s]
May 1 12:39:30.174: INFO: Created: latency-svc-5jvmp
May 1 12:39:30.254: INFO: Got endpoints: latency-svc-5jvmp [1.104056852s]
May 1 12:39:30.279: INFO: Created: latency-svc-4mc7s
May 1 12:39:30.293: INFO: Got endpoints: latency-svc-4mc7s [1.057995914s]
May 1 12:39:30.319: INFO: Created: latency-svc-swgrf
May 1 12:39:30.348: INFO: Got endpoints: latency-svc-swgrf [1.041072127s]
May 1 12:39:30.440: INFO: Created: latency-svc-px8vf
May 1 12:39:30.442: INFO: Got endpoints: latency-svc-px8vf [1.086640911s]
May 1 12:39:30.475: INFO: Created: latency-svc-8l7ns
May 1 12:39:30.493: INFO: Got endpoints: latency-svc-8l7ns [1.101714757s]
May 1 12:39:30.523: INFO: Created: latency-svc-cwx5p
May 1 12:39:30.589: INFO: Got endpoints: latency-svc-cwx5p [1.136673282s]
May 1 12:39:30.592: INFO: Created: latency-svc-45t2f
May 1 12:39:30.601: INFO: Got endpoints: latency-svc-45t2f [1.10636994s]
May 1 12:39:30.624: INFO: Created: latency-svc-cpwns
May 1 12:39:30.637: INFO: Got endpoints: latency-svc-cpwns [1.10090093s]
May 1 12:39:30.674: INFO: Created: latency-svc-s767n
May 1 12:39:30.769: INFO: Got endpoints: latency-svc-s767n [1.164379657s]
May 1 12:39:30.771: INFO: Created: latency-svc-25f4p
May 1 12:39:30.776: INFO: Got endpoints: latency-svc-25f4p [1.131207172s]
May 1 12:39:30.840: INFO: Created: latency-svc-6v74j
May 1 12:39:30.912: INFO: Got endpoints: latency-svc-6v74j [1.237835801s]
May 1 12:39:30.974: INFO: Created: latency-svc-h2t24
May 1 12:39:30.986: INFO: Got endpoints: latency-svc-h2t24 [1.219987177s]
May 1 12:39:31.075: INFO: Created: latency-svc-xfmpp
May 1 12:39:31.078: INFO: Got endpoints: latency-svc-xfmpp [1.147789172s]
May 1 12:39:31.110: INFO: Created: latency-svc-prft5
May 1 12:39:31.146: INFO: Got endpoints: latency-svc-prft5 [1.057125165s]
May 1 12:39:31.224: INFO: Created: latency-svc-vbgdl
May 1 12:39:31.232: INFO: Got endpoints: latency-svc-vbgdl [1.093618876s]
May 1 12:39:31.278: INFO: Created: latency-svc-8xjzj
May 1 12:39:31.298: INFO: Got endpoints: latency-svc-8xjzj [1.044241095s]
May 1 12:39:31.404: INFO: Created: latency-svc-d4qg4
May 1 12:39:31.407: INFO: Got endpoints: latency-svc-d4qg4 [1.113102177s]
May 1 12:39:31.440: INFO: Created: latency-svc-dx9gp
May 1 12:39:31.462: INFO: Got endpoints: latency-svc-dx9gp [1.113640239s]
May 1 12:39:31.477: INFO: Created: latency-svc-45jds
May 1 12:39:31.491: INFO: Got endpoints: latency-svc-45jds [1.048675137s]
May 1 12:39:31.554: INFO: Created: latency-svc-h942z
May 1 12:39:31.556: INFO: Got endpoints: latency-svc-h942z [1.062558637s]
May 1 12:39:31.585: INFO: Created: latency-svc-bj7hx
May 1 12:39:31.600: INFO: Got endpoints: latency-svc-bj7hx [1.011091111s]
May 1 12:39:31.627: INFO: Created: latency-svc-s9zhl
May 1 12:39:31.651: INFO: Got endpoints: latency-svc-s9zhl [1.050256454s]
May 1 12:39:31.706: INFO: Created: latency-svc-8rjxk
May 1 12:39:31.737: INFO: Got endpoints: latency-svc-8rjxk [1.099580137s]
May 1 12:39:31.765: INFO: Created: latency-svc-849t5
May 1 12:39:31.794: INFO: Got endpoints: latency-svc-849t5 [1.02527648s]
May 1 12:39:31.855: INFO: Created: latency-svc-ddzkk
May 1 12:39:31.876: INFO: Got endpoints: latency-svc-ddzkk [1.100803924s]
May 1 12:39:31.898: INFO: Created: latency-svc-tbj46
May 1 12:39:31.906: INFO: Got endpoints: latency-svc-tbj46 [993.881715ms]
May 1 12:39:31.927: INFO: Created: latency-svc-dcnpv
May 1 12:39:32.038: INFO: Got endpoints: latency-svc-dcnpv [1.052624355s]
May 1 12:39:32.065: INFO: Created: latency-svc-pgqv6
May 1 12:39:32.095: INFO: Got endpoints: latency-svc-pgqv6 [1.017040097s]
May 1 12:39:32.126: INFO: Created: latency-svc-k94x4
May 1 12:39:32.135: INFO: Got endpoints: latency-svc-k94x4 [988.940271ms]
May 1 12:39:32.192: INFO: Created: latency-svc-7dnn9
May 1 12:39:32.207: INFO: Got endpoints: latency-svc-7dnn9 [975.532617ms]
May 1 12:39:32.239: INFO: Created: latency-svc-7jn2c
May 1 12:39:32.262: INFO: Got endpoints: latency-svc-7jn2c [964.079272ms]
May 1 12:39:32.434: INFO: Created: latency-svc-tldwt
May 1 12:39:32.479: INFO: Got endpoints: latency-svc-tldwt [1.072492964s]
May 1 12:39:32.480: INFO: Created: latency-svc-6cz7p
May 1 12:39:32.496: INFO: Got endpoints: latency-svc-6cz7p [1.03402772s]
May 1 12:39:32.514: INFO: Created: latency-svc-hkfcf
May 1 12:39:32.589: INFO: Got endpoints: latency-svc-hkfcf [1.09855657s]
May 1 12:39:32.594: INFO: Created: latency-svc-hk8pd
May 1 12:39:32.610: INFO: Got endpoints: latency-svc-hk8pd [1.054617985s]
May 1 12:39:32.635: INFO: Created: latency-svc-vwsvj
May 1 12:39:32.653: INFO: Got endpoints: latency-svc-vwsvj [1.052355956s]
May 1 12:39:32.682: INFO: Created: latency-svc-vnr5n
May 1 12:39:32.811: INFO: Got endpoints: latency-svc-vnr5n [1.159505412s]
May 1 12:39:33.087: INFO: Created: latency-svc-47z5s
May 1 12:39:33.150: INFO: Got endpoints: latency-svc-47z5s [1.413624523s]
May 1 12:39:33.314: INFO: Created: latency-svc-sl795
May 1 12:39:33.331: INFO: Got endpoints: latency-svc-sl795 [1.536914218s]
May 1 12:39:33.548: INFO: Created: latency-svc-7n95s
May 1 12:39:33.553: INFO: Got endpoints: latency-svc-7n95s [1.676240419s]
May 1 12:39:33.721: INFO: Created: latency-svc-dkltf
May 1 12:39:33.764: INFO: Got endpoints: latency-svc-dkltf [1.857218236s]
May 1 12:39:33.877: INFO: Created: latency-svc-zvxgh
May 1 12:39:33.880: INFO: Got endpoints: latency-svc-zvxgh [1.841362501s]
May 1 12:39:34.093: INFO: Created: latency-svc-8rc89
May 1 12:39:34.097: INFO: Got endpoints: latency-svc-8rc89 [2.002515346s]
May 1 12:39:34.161: INFO: Created: latency-svc-992jh
May 1 12:39:34.177: INFO: Got endpoints: latency-svc-992jh [2.04209567s]
May 1 12:39:34.242: INFO: Created: latency-svc-zlj8g
May 1 12:39:34.245: INFO: Got endpoints: latency-svc-zlj8g [2.037170074s]
May 1 12:39:34.292: INFO: Created: latency-svc-m8qgz
May 1 12:39:34.309: INFO: Got endpoints: latency-svc-m8qgz [2.046445539s]
May 1 12:39:34.334: INFO: Created: latency-svc-t692f
May 1 12:39:34.415: INFO: Got endpoints: latency-svc-t692f [1.936197211s]
May 1 12:39:34.442: INFO: Created: latency-svc-znhhc
May 1 12:39:34.459: INFO: Got endpoints: latency-svc-znhhc [1.96283493s]
May 1 12:39:34.495: INFO: Created: latency-svc-n2ldw
May 1 12:39:34.507: INFO: Got endpoints: latency-svc-n2ldw [1.918124122s]
May 1 12:39:34.590: INFO: Created: latency-svc-26zr8
May 1 12:39:34.593: INFO: Got endpoints: latency-svc-26zr8 [1.982714045s]
May 1 12:39:34.622: INFO: Created: latency-svc-fg9dc
May 1 12:39:34.640: INFO: Got endpoints: latency-svc-fg9dc [1.987621452s]
May 1 12:39:34.658: INFO: Created: latency-svc-j57l5
May 1 12:39:34.781: INFO: Got endpoints: latency-svc-j57l5 [1.970712951s]
May 1 12:39:34.795: INFO: Created: latency-svc-69vr6
May 1 12:39:34.808: INFO: Got endpoints: latency-svc-69vr6 [1.657854517s]
May 1 12:39:34.832: INFO: Created: latency-svc-t4gt7
May 1 12:39:34.850: INFO: Got endpoints: latency-svc-t4gt7 [1.519263237s]
May 1 12:39:34.960: INFO: Created: latency-svc-hpjtw
May 1 12:39:35.042: INFO: Got endpoints: latency-svc-hpjtw [1.489521512s]
May 1 12:39:35.128: INFO: Created: latency-svc-bn6vj
May 1 12:39:35.135: INFO: Got endpoints: latency-svc-bn6vj [1.370957552s]
May 1 12:39:35.161: INFO: Created: latency-svc-b8r7l
May 1 12:39:35.177: INFO: Got endpoints: latency-svc-b8r7l [1.29724366s]
May 1 12:39:35.214: INFO: Created: latency-svc-gbxm2
May 1 12:39:35.226: INFO: Got endpoints: latency-svc-gbxm2 [1.128435145s]
May 1 12:39:35.286: INFO: Created: latency-svc-485p2
May 1 12:39:35.298: INFO: Got endpoints: latency-svc-485p2 [1.120271811s]
May 1 12:39:35.335: INFO: Created: latency-svc-wn7vm
May 1 12:39:35.358: INFO: Got endpoints: latency-svc-wn7vm [1.113378445s]
May 1 12:39:35.439: INFO: Created: latency-svc-xpfl8
May 1 12:39:35.443: INFO: Got endpoints: latency-svc-xpfl8 [1.134178869s]
May 1 12:39:35.443: INFO: Created: latency-svc-2ln2w
May 1 12:39:35.462: INFO: Got endpoints: latency-svc-2ln2w [1.046550804s]
May 1 12:39:35.498: INFO: Created: latency-svc-h8h8l
May 1 12:39:35.533: INFO: Got endpoints: latency-svc-h8h8l [1.073695503s]
May 1 12:39:35.601: INFO: Created: latency-svc-qhw77
May 1 12:39:35.619: INFO: Got endpoints: latency-svc-qhw77 [1.111053845s]
May 1 12:39:35.673: INFO: Created: latency-svc-bq2qw
May 1 12:39:35.690: INFO: Got endpoints: latency-svc-bq2qw [1.097133292s]
May 1 12:39:35.800: INFO: Created: latency-svc-nt2dx
May 1 12:39:35.803: INFO: Got endpoints: latency-svc-nt2dx [1.162388441s]
May 1 12:39:35.845: INFO: Created: latency-svc-b4qhh
May 1 12:39:35.888: INFO: Created: latency-svc-c7c75
May 1 12:39:35.960: INFO: Got endpoints: latency-svc-b4qhh [1.178180343s]
May 1 12:39:35.960: INFO: Created: latency-svc-tcfjx
May 1 12:39:35.974: INFO: Got endpoints: latency-svc-tcfjx [1.123172409s]
May 1 12:39:36.025: INFO: Got endpoints: latency-svc-c7c75 [1.216496384s]
May 1 12:39:36.025: INFO: Created: latency-svc-hhfs4
May 1 12:39:36.068: INFO: Got endpoints: latency-svc-hhfs4 [1.025953006s]
May 1 12:39:36.091: INFO: Created: latency-svc-xr7jf
May 1 12:39:36.106: INFO: Got endpoints: latency-svc-xr7jf [971.433487ms]
May 1 12:39:36.126: INFO: Created: latency-svc-g87zs
May 1 12:39:36.157: INFO: Got endpoints: latency-svc-g87zs [980.228295ms]
May 1 12:39:36.219: INFO: Created: latency-svc-fq6ls
May 1 12:39:36.251: INFO: Got endpoints: latency-svc-fq6ls [1.025059421s]
May 1 12:39:36.294: INFO: Created: latency-svc-765gl
May 1 12:39:36.311: INFO: Got endpoints: latency-svc-765gl [1.013594705s]
May 1 12:39:36.374: INFO: Created: latency-svc-c5p2v
May 1 12:39:36.383: INFO: Got endpoints: latency-svc-c5p2v [1.025114314s]
May 1 12:39:36.408: INFO: Created: latency-svc-8vnrk
May 1 12:39:36.432: INFO: Got endpoints: latency-svc-8vnrk [988.965716ms]
May 1 12:39:36.470: INFO: Created: latency-svc-2frl5
May 1 12:39:36.523: INFO: Got endpoints: latency-svc-2frl5 [1.061132727s]
May 1 12:39:36.530: INFO: Created: latency-svc-qd2xm
May 1 12:39:36.571: INFO: Got endpoints: latency-svc-qd2xm [1.038441981s]
May 1 12:39:36.612: INFO: Created: latency-svc-wvbwr
May 1 12:39:36.619: INFO: Got endpoints: latency-svc-wvbwr [1.000844521s]
May 1 12:39:36.673: INFO: Created: latency-svc-gplwk
May 1 12:39:36.680: INFO: Got endpoints: latency-svc-gplwk [989.18869ms]
May 1 12:39:36.740: INFO: Created: latency-svc-ktbgl
May 1 12:39:36.752: INFO: Got endpoints: latency-svc-ktbgl [948.886723ms]
May 1 12:39:36.877: INFO: Created: latency-svc-wrddn
May 1 12:39:36.880: INFO: Got endpoints: latency-svc-wrddn [920.02179ms]
May 1 12:39:37.045: INFO: Created: latency-svc-5t6gv
May 1 12:39:37.064: INFO: Got endpoints: latency-svc-5t6gv [1.090580996s]
May 1 12:39:37.087: INFO: Created: latency-svc-zv95g
May 1 12:39:37.101: INFO: Got endpoints: latency-svc-zv95g [1.075544258s]
May 1 12:39:37.140: INFO: Created: latency-svc-dvjqr
May 1 12:39:37.194: INFO: Got endpoints: latency-svc-dvjqr [1.125546401s]
May 1 12:39:37.212: INFO: Created: latency-svc-zrvvs
May 1 12:39:37.227: INFO: Got endpoints: latency-svc-zrvvs [1.121021815s]
May 1 12:39:37.285: INFO: Created: latency-svc-t8jq4
May 1 12:39:37.343: INFO: Got endpoints: latency-svc-t8jq4 [1.185964978s]
May 1 12:39:37.368: INFO: Created: latency-svc-gmm66
May 1 12:39:37.379: INFO: Got endpoints: latency-svc-gmm66 [1.127790284s]
May 1 12:39:37.411: INFO: Created: latency-svc-6qc84
May 1 12:39:37.427: INFO: Got endpoints: latency-svc-6qc84 [1.115242077s]
May 1 12:39:37.506: INFO: Created: latency-svc-cqvrz
May 1 12:39:37.508: INFO: Got endpoints: latency-svc-cqvrz [1.124345894s]
May 1 12:39:37.542: INFO: Created: latency-svc-crjfm
May 1 12:39:37.579: INFO: Got endpoints: latency-svc-crjfm [1.146425782s]
May 1 12:39:37.668: INFO: Created: latency-svc-9nkd4
May 1 12:39:37.671: INFO: Got endpoints: latency-svc-9nkd4 [1.147676926s]
May 1 12:39:37.705: INFO: Created: latency-svc-pxsnk
May 1 12:39:37.746: INFO: Got endpoints: latency-svc-pxsnk [1.175369566s]
May 1 12:39:38.172: INFO: Created: latency-svc-4ln64
May 1 12:39:38.471: INFO: Got endpoints: latency-svc-4ln64 [1.851191644s]
May 1 12:39:38.502: INFO: Created: latency-svc-qs8zj
May 1 12:39:38.539: INFO: Got endpoints: latency-svc-qs8zj [1.859458319s]
May 1 12:39:38.646: INFO: Created: latency-svc-ff25b
May 1 12:39:38.682: INFO: Got endpoints: latency-svc-ff25b [1.930389503s]
May 1 12:39:38.931: INFO: Created: latency-svc-w7phz
May 1 12:39:38.935: INFO: Got endpoints: latency-svc-w7phz [2.055208356s]
May 1 12:39:39.199: INFO: Created: latency-svc-f8hc6
May 1 12:39:39.246: INFO: Got endpoints: latency-svc-f8hc6 [2.18147811s]
May 1 12:39:39.384: INFO: Created: latency-svc-9wxlj
May 1 12:39:39.396: INFO: Got endpoints: latency-svc-9wxlj [2.294943655s]
May 1 12:39:39.470: INFO: Created: latency-svc-bhjvk
May 1 12:39:39.480: INFO: Got endpoints: latency-svc-bhjvk [2.286001941s]
May 1 12:39:39.535: INFO: Created: latency-svc-crwqc
May 1 12:39:39.631: INFO: Got endpoints: latency-svc-crwqc [2.403989923s]
May 1 12:39:39.633: INFO: Created: latency-svc-tgdts
May 1 12:39:39.643: INFO: Got endpoints: latency-svc-tgdts [2.299233888s]
May 1 12:39:39.668: INFO: Created: latency-svc-sgdlh
May 1 12:39:39.679: INFO: Got endpoints: latency-svc-sgdlh [2.30016112s]
May 1 12:39:39.709: INFO: Created: latency-svc-977rt
May 1 12:39:39.722: INFO: Got endpoints: latency-svc-977rt [2.29504829s]
May 1 12:39:39.800: INFO: Created: latency-svc-6pwmv
May 1 12:39:39.806: INFO: Got endpoints: latency-svc-6pwmv [2.298402419s]
May 1 12:39:39.829: INFO: Created: latency-svc-rq26h
May 1 12:39:39.848: INFO: Got endpoints: latency-svc-rq26h [2.26962536s]
May 1 12:39:39.889: INFO: Created: latency-svc-bq24z
May 1 12:39:39.955: INFO: Got endpoints: latency-svc-bq24z [2.283607477s]
May 1 12:39:39.978: INFO: Created: latency-svc-228wd
May 1 12:39:39.999: INFO: Got endpoints: latency-svc-228wd [2.252711573s]
May 1 12:39:40.029: INFO: Created: latency-svc-bkk5x
May 1 12:39:40.041: INFO: Got endpoints: latency-svc-bkk5x [1.570240003s]
May 1 12:39:40.104: INFO: Created: latency-svc-2rltb
May 1 12:39:40.114: INFO: Got endpoints: latency-svc-2rltb [1.574503618s]
May 1 12:39:40.147: INFO: Created: latency-svc-nrx96
May 1 12:39:40.162: INFO: Got endpoints: latency-svc-nrx96 [1.480025803s]
May 1 12:39:40.188: INFO: Created: latency-svc-rrgv7
May 1 12:39:40.290: INFO: Got endpoints: latency-svc-rrgv7 [1.3548318s]
May 1 12:39:40.310: INFO: Created: latency-svc-5c7hk
May 1 12:39:40.313: INFO: Got endpoints: latency-svc-5c7hk [1.067464168s]
May 1 12:39:40.350: INFO: Created: latency-svc-w5q5m
May 1 12:39:40.361: INFO: Got endpoints: latency-svc-w5q5m [965.880447ms]
May 1 12:39:40.434: INFO: Created: latency-svc-qdn7c
May 1 12:39:40.440: INFO: Got endpoints: latency-svc-qdn7c [959.637724ms]
May 1 12:39:40.460: INFO: Created: latency-svc-c5z9n
May 1 12:39:40.464: INFO: Got endpoints: latency-svc-c5z9n [832.440426ms]
May 1 12:39:40.571: INFO: Created: latency-svc-h5bsq
May 1 12:39:40.575: INFO: Got endpoints: latency-svc-h5bsq [931.783217ms]
May 1 12:39:40.610: INFO: Created: latency-svc-wc4kl
May 1 12:39:40.615: INFO: Got endpoints: latency-svc-wc4kl [935.815767ms]
May 1 12:39:40.639: INFO: Created: latency-svc-g7t9d
May 1 12:39:40.664: INFO: Got endpoints: latency-svc-g7t9d [941.800943ms]
May 1 12:39:40.734: INFO: Created: latency-svc-r7m4z
May 1 12:39:40.736: INFO: Got endpoints: latency-svc-r7m4z [929.530204ms]
May 1 12:39:40.826: INFO: Created: latency-svc-f6kdf
May 1 12:39:40.870: INFO: Got endpoints: latency-svc-f6kdf [1.022117644s]
May 1 12:39:40.898: INFO: Created: latency-svc-2xjhs
May 1 12:39:40.910: INFO: Got endpoints: latency-svc-2xjhs [955.452398ms]
May 1 12:39:40.939: INFO: Created: latency-svc-6wf68
May 1 12:39:40.958: INFO: Got endpoints: latency-svc-6wf68 [959.083173ms]
May 1 12:39:41.022: INFO: Created: latency-svc-rcdfk
May 1 12:39:41.061: INFO: Got endpoints: latency-svc-rcdfk [1.019971129s]
May 1 12:39:41.164: INFO: Created: latency-svc-kcwl6
May 1 12:39:41.175: INFO: Got endpoints: latency-svc-kcwl6 [1.061686326s]
May 1 12:39:41.198: INFO: Created: latency-svc-5qkp2
May 1 12:39:41.207: INFO: Got endpoints: latency-svc-5qkp2 [1.044738367s]
May 1 12:39:41.263: INFO: Created: latency-svc-jg9bv
May 1 12:39:41.326: INFO: Got endpoints: latency-svc-jg9bv [1.035620424s]
May 1 12:39:41.353: INFO: Created: latency-svc-898gj
May 1 12:39:41.368: INFO: Got endpoints: latency-svc-898gj [1.055094187s]
May 1 12:39:41.401: INFO: Created: latency-svc-sc42w
May 1 12:39:41.415: INFO: Got endpoints: latency-svc-sc42w [1.053930847s]
May 1 12:39:41.499: INFO: Created: latency-svc-5n7c5
May 1 12:39:41.499: INFO: Got endpoints: latency-svc-5n7c5 [1.058909351s]
May 1 12:39:41.527: INFO: Created: latency-svc-l25s8
May 1 12:39:41.542: INFO: Got endpoints: latency-svc-l25s8 [1.078587401s]
May 1 12:39:41.568: INFO: Created: latency-svc-c2xmp
May 1 12:39:41.586: INFO: Got endpoints: latency-svc-c2xmp [1.011021694s]
May 1 12:39:41.659: INFO: Created: latency-svc-jwslm
May 1 12:39:41.683: INFO: Got endpoints: latency-svc-jwslm [1.067859181s]
May 1 12:39:41.683: INFO: Created: latency-svc-fpgg7
May 1 12:39:41.700: INFO: Got endpoints: latency-svc-fpgg7 [1.036013735s]
May 1 12:39:41.730: INFO: Created: latency-svc-bgf7q
May 1 12:39:41.823: INFO: Got endpoints: latency-svc-bgf7q [1.087229511s]
May 1 12:39:41.844: INFO: Created: latency-svc-zfmlr
May 1 12:39:41.862: INFO: Got endpoints: latency-svc-zfmlr [991.99156ms]
May 1 12:39:41.887: INFO: Created: latency-svc-htr8p
May 1 12:39:41.905: INFO: Got endpoints: latency-svc-htr8p [994.767625ms]
May 1 12:39:41.967: INFO: Created: latency-svc-np89t
May 1 12:39:41.970: INFO: Got endpoints: latency-svc-np89t [1.011118402s]
May 1 12:39:42.012: INFO: Created: latency-svc-6g8z5
May 1 12:39:42.035: INFO: Got endpoints: latency-svc-6g8z5 [974.05907ms]
May 1 12:39:42.114: INFO: Created: latency-svc-7pq9p
May 1 12:39:42.128: INFO: Got endpoints: latency-svc-7pq9p [952.114189ms]
May 1 12:39:42.150: INFO: Created: latency-svc-hjfcs
May 1 12:39:42.164: INFO: Got endpoints: latency-svc-hjfcs [957.014161ms]
May 1 12:39:42.187: INFO: Created: latency-svc-xwccz
May 1 12:39:42.202: INFO: Got endpoints: latency-svc-xwccz [876.369356ms]
May 1 12:39:42.272: INFO: Created: latency-svc-7hxr8
May 1 12:39:42.275: INFO: Got endpoints: latency-svc-7hxr8 [906.495095ms]
May 1 12:39:42.331: INFO: Created: latency-svc-pmc7d
May 1 12:39:42.360: INFO: Got endpoints: latency-svc-pmc7d [944.737928ms]
May 1 12:39:42.427: INFO: Created: latency-svc-5l829
May 1 12:39:42.442: INFO: Got endpoints: latency-svc-5l829 [943.185667ms]
May 1 12:39:42.463: INFO: Created: latency-svc-vhhdf
May 1 12:39:42.478: INFO: Got endpoints: latency-svc-vhhdf [935.828127ms]
May 1 12:39:42.510: INFO: Created: latency-svc-97gqw
May 1 12:39:42.559: INFO: Got endpoints: latency-svc-97gqw [973.183441ms]
May 1 12:39:42.576: INFO: Created: latency-svc-2vjzd
May 1 12:39:42.587: INFO: Got endpoints: latency-svc-2vjzd [904.022841ms]
May 1 12:39:42.612: INFO: Created: latency-svc-s5stz
May 1 12:39:42.623: INFO: Got endpoints: latency-svc-s5stz [923.710696ms]
May 1 12:39:42.659: INFO: Created: latency-svc-9hwms
May 1 12:39:42.756: INFO: Got endpoints: latency-svc-9hwms [932.855907ms]
May 1 12:39:42.758: INFO: Created: latency-svc-zxc85
May 1 12:39:42.793: INFO: Got endpoints: latency-svc-zxc85 [930.610817ms]
May 1 12:39:43.039: INFO: Created: latency-svc-zn9px
May 1 12:39:43.140: INFO: Got endpoints: latency-svc-zn9px [1.235134375s]
May 1 12:39:43.145: INFO: Created: latency-svc-zrncs
May 1 12:39:43.152: INFO: Got endpoints: latency-svc-zrncs [1.182659216s]
May 1 12:39:43.195: INFO: Created: latency-svc-8tn4t
May 1 12:39:43.219: INFO: Got endpoints: latency-svc-8tn4t [1.183872035s]
May 1 12:39:43.279: INFO: Created: latency-svc-tb2t8
May 1 12:39:43.320: INFO: Got endpoints: latency-svc-tb2t8 [1.192461571s]
May 1 12:39:43.321: INFO: Created: latency-svc-h8zbk
May 1 12:39:43.356: INFO: Got endpoints: latency-svc-h8zbk [1.192074366s]
May 1 12:39:43.417: INFO: Created: latency-svc-7fdpg
May 1 12:39:43.419: INFO: Got endpoints: latency-svc-7fdpg [1.216576132s]
May 1 12:39:43.464: INFO: Created: latency-svc-qxn4p
May 1 12:39:43.485: INFO: Got endpoints: latency-svc-qxn4p [1.209810276s]
May 1 12:39:43.584: INFO: Created: latency-svc-r6jbk
May 1 12:39:43.586: INFO: Got endpoints: latency-svc-r6jbk [1.225747454s]
May 1 12:39:43.620: INFO: Created: latency-svc-qtzj5
May 1 12:39:43.668: INFO: Got endpoints: latency-svc-qtzj5 [1.225697841s]
May 1 12:39:43.769: INFO: Created: latency-svc-9752w
May 1 12:39:43.773: INFO: Got endpoints: latency-svc-9752w [1.295051848s]
May 1 12:39:43.814: INFO: Created: latency-svc-qm526
May 1 12:39:43.828: INFO: Got endpoints: latency-svc-qm526 [1.269050138s]
May 1 12:39:43.848: INFO: Created: latency-svc-9p9jf
May 1 12:39:43.858: INFO: Got endpoints: latency-svc-9p9jf [1.271146145s]
May 1 12:39:43.919: INFO: Created: latency-svc-lrwfh
May 1 12:39:43.922: INFO: Got endpoints: latency-svc-lrwfh [1.298237416s]
May 1 12:39:43.969: INFO: Created: latency-svc-g4nl2
May 1 12:39:43.985: INFO: Got endpoints: latency-svc-g4nl2 [1.228767506s]
May 1 12:39:44.004: INFO: Created: latency-svc-sskcx
May 1 12:39:44.062: INFO: Got endpoints: latency-svc-sskcx [1.269097314s]
May 1 12:39:44.070: INFO: Created: latency-svc-ffxqk
May 1 12:39:44.088: INFO: Got endpoints: latency-svc-ffxqk [947.570814ms]
May 1 12:39:44.088: INFO: Latencies: [79.964783ms 150.889731ms 612.73786ms 832.440426ms 834.958621ms 876.369356ms 904.022841ms 906.495095ms 917.39595ms 920.02179ms 922.778973ms 923.710696ms 929.530204ms 930.610817ms 931.783217ms 932.779232ms 932.855907ms 935.815767ms 935.828127ms 939.23522ms 941.800943ms 943.185667ms 944.737928ms 947.570814ms 948.886723ms 952.114189ms 955.452398ms 957.014161ms 959.083173ms 959.637724ms 964.079272ms 965.880447ms 971.433487ms 973.183441ms 974.05907ms 975.532617ms 980.228295ms 988.940271ms 988.965716ms 989.18869ms 991.99156ms 993.881715ms 994.767625ms 1.000844521s 1.011021694s 1.011091111s 1.011118402s 1.013594705s 1.017040097s 1.019971129s 1.022117644s 1.025059421s 1.025114314s 1.02527648s 1.025953006s 1.03402772s 1.035620424s 1.036013735s 1.038441981s 1.041072127s 1.044241095s 1.044738367s 1.046550804s 1.048675137s 1.050256454s 1.052355956s 1.052624355s 1.053930847s 1.054617985s 1.055094187s 1.057125165s 1.057995914s 1.058909351s 1.061132727s 1.061686326s 1.062558637s 1.065914286s 1.067464168s 1.067859181s 1.072492964s 1.072537889s 1.072758059s 1.073695503s 1.075544258s 1.078587401s 1.086640911s 1.087229511s 1.089971321s 1.090580996s 1.093618876s 1.097133292s 1.09855657s 1.099580137s 1.100803924s 1.10090093s 1.101714757s 1.104056852s 1.10636994s 1.111053845s 1.113102177s 1.113378445s 1.113640239s 1.115242077s 1.120271811s 1.121021815s 1.123172409s 1.124345894s 1.125546401s 1.127790284s 1.128435145s 1.131207172s 1.134178869s 1.136673282s 1.146425782s 1.147676926s 1.147789172s 1.159505412s 1.162388441s 1.163172979s 1.164379657s 1.175369566s 1.178180343s 1.182659216s 1.183872035s 1.185964978s 1.192074366s 1.192461571s 1.209810276s 1.216496384s 1.216576132s 1.219987177s 1.225697841s 1.225747454s 1.228767506s 1.235134375s 1.237835801s 1.269050138s 1.269097314s 1.271146145s 1.295051848s 1.29724366s 1.298237416s 1.301849817s 1.3548318s 1.370957552s 1.379048943s 1.387735922s 1.413624523s 1.480025803s 1.489521512s 1.519263237s 1.536914218s 1.539601248s 1.570240003s 1.574503618s 1.657854517s 1.676240419s 1.700498351s 1.841362501s 1.851191644s 1.857218236s 1.859458319s 1.907189964s 1.918124122s 1.930389503s 1.936197211s 1.96283493s 1.970712951s 1.982714045s 1.987621452s 2.002515346s 2.037170074s 2.04209567s 2.046445539s 2.055208356s 2.109919728s 2.18147811s 2.244987929s 2.252711573s 2.26962536s 2.283607477s 2.286001941s 2.294943655s 2.29504829s 2.298402419s 2.299233888s 2.30016112s 2.303807742s 2.347579502s 2.372427056s 2.380698802s 2.403989923s 2.470225405s 2.47349371s 2.608298939s 2.734135463s 2.890702409s 2.998238237s 3.020924347s 3.072552707s]
May 1 12:39:44.088: INFO: 50 %ile: 1.113378445s
May 1 12:39:44.088: INFO: 90 %ile: 2.283607477s
May 1 12:39:44.088: INFO: 99 %ile: 3.020924347s
May 1 12:39:44.088: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:39:44.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-qlrcl" for this suite.
May 1 12:40:18.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:40:18.116: INFO: namespace: e2e-tests-svc-latency-qlrcl, resource: bindings, ignored listing per whitelist
May 1 12:40:18.181: INFO: namespace e2e-tests-svc-latency-qlrcl deletion completed in 34.087016889s

• [SLOW TEST:56.828 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:40:18.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:41:18.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-qddr4" for this suite.
May 1 12:41:40.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:41:40.367: INFO: namespace: e2e-tests-container-probe-qddr4, resource: bindings, ignored listing per whitelist
May 1 12:41:40.399: INFO: namespace e2e-tests-container-probe-qddr4 deletion completed in 22.0808317s

• [SLOW TEST:82.218 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:41:40.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-1ae66172-8ba9-11ea-88a3-0242ac110017
May 1 12:41:40.582: INFO: Pod name my-hostname-basic-1ae66172-8ba9-11ea-88a3-0242ac110017: Found 0 pods out of 1
May 1 12:41:45.587: INFO: Pod name my-hostname-basic-1ae66172-8ba9-11ea-88a3-0242ac110017: Found 1 pods out of 1
May 1 12:41:45.587: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1ae66172-8ba9-11ea-88a3-0242ac110017" are running
May 1 12:41:45.590: INFO: Pod "my-hostname-basic-1ae66172-8ba9-11ea-88a3-0242ac110017-2msl5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 12:41:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 12:41:44 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 12:41:44 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 12:41:40 +0000 UTC Reason: Message:}])
May 1 12:41:45.590: INFO: Trying to dial the pod
May 1 12:41:50.599: INFO: Controller my-hostname-basic-1ae66172-8ba9-11ea-88a3-0242ac110017: Got expected result from replica 1 [my-hostname-basic-1ae66172-8ba9-11ea-88a3-0242ac110017-2msl5]: "my-hostname-basic-1ae66172-8ba9-11ea-88a3-0242ac110017-2msl5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:41:50.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-d9j5g" for this suite.
May 1 12:41:58.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:41:58.656: INFO: namespace: e2e-tests-replication-controller-d9j5g, resource: bindings, ignored listing per whitelist
May 1 12:41:58.686: INFO: namespace e2e-tests-replication-controller-d9j5g deletion completed in 8.083214356s
• [SLOW TEST:18.286 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:41:58.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0501 12:42:39.255869 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 1 12:42:39.255: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:42:39.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-qbdtp" for this suite.
May 1 12:42:51.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:42:51.312: INFO: namespace: e2e-tests-gc-qbdtp, resource: bindings, ignored listing per whitelist
May 1 12:42:51.337: INFO: namespace e2e-tests-gc-qbdtp deletion completed in 12.079371999s
• [SLOW TEST:52.651 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:42:51.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-5hrw
STEP: Creating a pod to test atomic-volume-subpath
May 1 12:42:52.067: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5hrw" in namespace "e2e-tests-subpath-p224r" to be "success or failure"
May 1 12:42:52.208: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Pending", Reason="", readiness=false.
Elapsed: 140.835442ms
May 1 12:42:54.211: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144335119s
May 1 12:42:56.215: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148520934s
May 1 12:42:58.220: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15278058s
May 1 12:43:00.224: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Running", Reason="", readiness=false. Elapsed: 8.157289294s
May 1 12:43:02.228: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Running", Reason="", readiness=false. Elapsed: 10.161465631s
May 1 12:43:04.232: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Running", Reason="", readiness=false. Elapsed: 12.165639913s
May 1 12:43:06.237: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Running", Reason="", readiness=false. Elapsed: 14.170592033s
May 1 12:43:08.611: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Running", Reason="", readiness=false. Elapsed: 16.544078835s
May 1 12:43:10.616: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Running", Reason="", readiness=false. Elapsed: 18.549763272s
May 1 12:43:12.869: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Running", Reason="", readiness=false. Elapsed: 20.802186245s
May 1 12:43:14.872: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Running", Reason="", readiness=false. Elapsed: 22.805213517s
May 1 12:43:16.876: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Running", Reason="", readiness=false. Elapsed: 24.808990761s
May 1 12:43:18.880: INFO: Pod "pod-subpath-test-downwardapi-5hrw": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 26.813428177s
STEP: Saw pod success
May 1 12:43:18.880: INFO: Pod "pod-subpath-test-downwardapi-5hrw" satisfied condition "success or failure"
May 1 12:43:18.884: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-5hrw container test-container-subpath-downwardapi-5hrw:
STEP: delete the pod
May 1 12:43:18.976: INFO: Waiting for pod pod-subpath-test-downwardapi-5hrw to disappear
May 1 12:43:19.088: INFO: Pod pod-subpath-test-downwardapi-5hrw no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5hrw
May 1 12:43:19.088: INFO: Deleting pod "pod-subpath-test-downwardapi-5hrw" in namespace "e2e-tests-subpath-p224r"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:43:19.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-p224r" for this suite.
May 1 12:43:25.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:43:25.314: INFO: namespace: e2e-tests-subpath-p224r, resource: bindings, ignored listing per whitelist
May 1 12:43:25.370: INFO: namespace e2e-tests-subpath-p224r deletion completed in 6.276729907s
• [SLOW TEST:34.032 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl
client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:43:25.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 1 12:43:25.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:30.249: INFO: stderr: "" May 1 12:43:30.249: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 1 12:43:30.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:30.362: INFO: stderr: "" May 1 12:43:30.362: INFO: stdout: "update-demo-nautilus-4prgw update-demo-nautilus-cqlxk " May 1 12:43:30.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4prgw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:30.478: INFO: stderr: "" May 1 12:43:30.478: INFO: stdout: "" May 1 12:43:30.478: INFO: update-demo-nautilus-4prgw is created but not running May 1 12:43:35.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:35.588: INFO: stderr: "" May 1 12:43:35.588: INFO: stdout: "update-demo-nautilus-4prgw update-demo-nautilus-cqlxk " May 1 12:43:35.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4prgw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:35.692: INFO: stderr: "" May 1 12:43:35.692: INFO: stdout: "true" May 1 12:43:35.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4prgw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:35.791: INFO: stderr: "" May 1 12:43:35.792: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 12:43:35.792: INFO: validating pod update-demo-nautilus-4prgw May 1 12:43:35.796: INFO: got data: { "image": "nautilus.jpg" } May 1 12:43:35.796: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 12:43:35.796: INFO: update-demo-nautilus-4prgw is verified up and running May 1 12:43:35.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqlxk -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:35.900: INFO: stderr: "" May 1 12:43:35.900: INFO: stdout: "true" May 1 12:43:35.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cqlxk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:36.007: INFO: stderr: "" May 1 12:43:36.007: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 12:43:36.007: INFO: validating pod update-demo-nautilus-cqlxk May 1 12:43:36.011: INFO: got data: { "image": "nautilus.jpg" } May 1 12:43:36.011: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 12:43:36.011: INFO: update-demo-nautilus-cqlxk is verified up and running STEP: scaling down the replication controller May 1 12:43:36.013: INFO: scanned /root for discovery docs: May 1 12:43:36.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:37.147: INFO: stderr: "" May 1 12:43:37.148: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 1 12:43:37.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:37.262: INFO: stderr: "" May 1 12:43:37.262: INFO: stdout: "update-demo-nautilus-4prgw update-demo-nautilus-cqlxk " STEP: Replicas for name=update-demo: expected=1 actual=2 May 1 12:43:42.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:42.356: INFO: stderr: "" May 1 12:43:42.356: INFO: stdout: "update-demo-nautilus-4prgw " May 1 12:43:42.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4prgw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:42.443: INFO: stderr: "" May 1 12:43:42.443: INFO: stdout: "true" May 1 12:43:42.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4prgw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:42.534: INFO: stderr: "" May 1 12:43:42.534: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 12:43:42.534: INFO: validating pod update-demo-nautilus-4prgw May 1 12:43:42.537: INFO: got data: { "image": "nautilus.jpg" } May 1 12:43:42.537: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 1 12:43:42.537: INFO: update-demo-nautilus-4prgw is verified up and running STEP: scaling up the replication controller May 1 12:43:42.539: INFO: scanned /root for discovery docs: May 1 12:43:42.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:43.687: INFO: stderr: "" May 1 12:43:43.687: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 1 12:43:43.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:43.789: INFO: stderr: "" May 1 12:43:43.789: INFO: stdout: "update-demo-nautilus-4prgw update-demo-nautilus-qbncr " May 1 12:43:43.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4prgw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:43.886: INFO: stderr: "" May 1 12:43:43.886: INFO: stdout: "true" May 1 12:43:43.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4prgw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:43.998: INFO: stderr: "" May 1 12:43:43.998: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 12:43:43.998: INFO: validating pod update-demo-nautilus-4prgw May 1 12:43:44.043: INFO: got data: { "image": "nautilus.jpg" } May 1 12:43:44.043: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 12:43:44.043: INFO: update-demo-nautilus-4prgw is verified up and running May 1 12:43:44.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qbncr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:44.131: INFO: stderr: "" May 1 12:43:44.131: INFO: stdout: "" May 1 12:43:44.131: INFO: update-demo-nautilus-qbncr is created but not running May 1 12:43:49.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:49.249: INFO: stderr: "" May 1 12:43:49.249: INFO: stdout: "update-demo-nautilus-4prgw update-demo-nautilus-qbncr " May 1 12:43:49.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4prgw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:49.353: INFO: stderr: "" May 1 12:43:49.353: INFO: stdout: "true" May 1 12:43:49.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4prgw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:49.462: INFO: stderr: "" May 1 12:43:49.462: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 12:43:49.462: INFO: validating pod update-demo-nautilus-4prgw May 1 12:43:49.466: INFO: got data: { "image": "nautilus.jpg" } May 1 12:43:49.466: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 12:43:49.466: INFO: update-demo-nautilus-4prgw is verified up and running May 1 12:43:49.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qbncr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:49.560: INFO: stderr: "" May 1 12:43:49.560: INFO: stdout: "true" May 1 12:43:49.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qbncr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-q8fjq' May 1 12:43:49.660: INFO: stderr: "" May 1 12:43:49.660: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 12:43:49.660: INFO: validating pod update-demo-nautilus-qbncr May 1 12:43:49.664: INFO: got data: { "image": "nautilus.jpg" } May 1 12:43:49.664: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 1 12:43:49.664: INFO: update-demo-nautilus-qbncr is verified up and running
STEP: using delete to clean up resources
May 1 12:43:49.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-q8fjq'
May 1 12:43:49.789: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 12:43:49.789: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 1 12:43:49.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-q8fjq'
May 1 12:43:49.900: INFO: stderr: "No resources found.\n"
May 1 12:43:49.900: INFO: stdout: ""
May 1 12:43:49.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-q8fjq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 1 12:43:50.016: INFO: stderr: ""
May 1 12:43:50.016: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 1 12:43:50.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-q8fjq" for this suite.
May 1 12:44:12.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 1 12:44:12.141: INFO: namespace: e2e-tests-kubectl-q8fjq, resource: bindings, ignored listing per whitelist
May 1 12:44:12.201: INFO: namespace e2e-tests-kubectl-q8fjq deletion completed in 22.18105411s
• [SLOW TEST:46.831 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 1 12:44:12.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-dg7pm
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-dg7pm STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-dg7pm May 1 12:44:12.341: INFO: Found 0 stateful pods, waiting for 1 May 1 12:44:22.346: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 1 12:44:22.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dg7pm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 12:44:22.609: INFO: stderr: "I0501 12:44:22.472607 2852 log.go:172] (0xc0007144d0) (0xc000679360) Create stream\nI0501 12:44:22.472684 2852 log.go:172] (0xc0007144d0) (0xc000679360) Stream added, broadcasting: 1\nI0501 12:44:22.475201 2852 log.go:172] (0xc0007144d0) Reply frame received for 1\nI0501 12:44:22.475241 2852 log.go:172] (0xc0007144d0) (0xc00002c000) Create stream\nI0501 12:44:22.475253 2852 log.go:172] (0xc0007144d0) (0xc00002c000) Stream added, broadcasting: 3\nI0501 12:44:22.476227 2852 log.go:172] (0xc0007144d0) Reply frame received for 3\nI0501 12:44:22.476262 2852 log.go:172] (0xc0007144d0) (0xc000679400) Create stream\nI0501 12:44:22.476274 2852 log.go:172] (0xc0007144d0) (0xc000679400) Stream added, broadcasting: 5\nI0501 12:44:22.477774 2852 log.go:172] (0xc0007144d0) Reply frame received for 5\nI0501 12:44:22.602256 2852 log.go:172] (0xc0007144d0) Data frame received for 3\nI0501 12:44:22.602315 2852 log.go:172] (0xc00002c000) (3) Data frame handling\nI0501 12:44:22.602357 2852 log.go:172] (0xc00002c000) (3) Data frame sent\nI0501 12:44:22.602687 2852 log.go:172] (0xc0007144d0) Data frame received for 3\nI0501 12:44:22.602727 2852 log.go:172] (0xc00002c000) 
(3) Data frame handling\nI0501 12:44:22.602933 2852 log.go:172] (0xc0007144d0) Data frame received for 5\nI0501 12:44:22.602956 2852 log.go:172] (0xc000679400) (5) Data frame handling\nI0501 12:44:22.604590 2852 log.go:172] (0xc0007144d0) Data frame received for 1\nI0501 12:44:22.604610 2852 log.go:172] (0xc000679360) (1) Data frame handling\nI0501 12:44:22.604620 2852 log.go:172] (0xc000679360) (1) Data frame sent\nI0501 12:44:22.604737 2852 log.go:172] (0xc0007144d0) (0xc000679360) Stream removed, broadcasting: 1\nI0501 12:44:22.604912 2852 log.go:172] (0xc0007144d0) (0xc000679360) Stream removed, broadcasting: 1\nI0501 12:44:22.604932 2852 log.go:172] (0xc0007144d0) (0xc00002c000) Stream removed, broadcasting: 3\nI0501 12:44:22.604944 2852 log.go:172] (0xc0007144d0) (0xc000679400) Stream removed, broadcasting: 5\n" May 1 12:44:22.609: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 12:44:22.609: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 12:44:22.613: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 1 12:44:32.618: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 12:44:32.618: INFO: Waiting for statefulset status.replicas updated to 0 May 1 12:44:32.647: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999611s May 1 12:44:33.834: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.978267347s May 1 12:44:34.837: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.791435024s May 1 12:44:35.843: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.787451212s May 1 12:44:36.848: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.782082241s May 1 12:44:37.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.777124819s May 1 12:44:38.971: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 3.772620398s May 1 12:44:39.983: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.653937531s May 1 12:44:40.987: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.642146405s May 1 12:44:41.991: INFO: Verifying statefulset ss doesn't scale past 1 for another 637.759629ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-dg7pm May 1 12:44:43.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dg7pm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 12:44:43.215: INFO: stderr: "I0501 12:44:43.150819 2875 log.go:172] (0xc000702420) (0xc00072a640) Create stream\nI0501 12:44:43.150891 2875 log.go:172] (0xc000702420) (0xc00072a640) Stream added, broadcasting: 1\nI0501 12:44:43.153832 2875 log.go:172] (0xc000702420) Reply frame received for 1\nI0501 12:44:43.153876 2875 log.go:172] (0xc000702420) (0xc000494be0) Create stream\nI0501 12:44:43.153886 2875 log.go:172] (0xc000702420) (0xc000494be0) Stream added, broadcasting: 3\nI0501 12:44:43.154879 2875 log.go:172] (0xc000702420) Reply frame received for 3\nI0501 12:44:43.154908 2875 log.go:172] (0xc000702420) (0xc00072a6e0) Create stream\nI0501 12:44:43.154918 2875 log.go:172] (0xc000702420) (0xc00072a6e0) Stream added, broadcasting: 5\nI0501 12:44:43.155673 2875 log.go:172] (0xc000702420) Reply frame received for 5\nI0501 12:44:43.208875 2875 log.go:172] (0xc000702420) Data frame received for 3\nI0501 12:44:43.208896 2875 log.go:172] (0xc000494be0) (3) Data frame handling\nI0501 12:44:43.208919 2875 log.go:172] (0xc000702420) Data frame received for 5\nI0501 12:44:43.208942 2875 log.go:172] (0xc00072a6e0) (5) Data frame handling\nI0501 12:44:43.208960 2875 log.go:172] (0xc000494be0) (3) Data frame sent\nI0501 12:44:43.208966 2875 log.go:172] (0xc000702420) Data frame received for 
3\nI0501 12:44:43.208974 2875 log.go:172] (0xc000494be0) (3) Data frame handling\nI0501 12:44:43.210792 2875 log.go:172] (0xc000702420) Data frame received for 1\nI0501 12:44:43.210815 2875 log.go:172] (0xc00072a640) (1) Data frame handling\nI0501 12:44:43.210828 2875 log.go:172] (0xc00072a640) (1) Data frame sent\nI0501 12:44:43.210852 2875 log.go:172] (0xc000702420) (0xc00072a640) Stream removed, broadcasting: 1\nI0501 12:44:43.210878 2875 log.go:172] (0xc000702420) Go away received\nI0501 12:44:43.211187 2875 log.go:172] (0xc000702420) (0xc00072a640) Stream removed, broadcasting: 1\nI0501 12:44:43.211225 2875 log.go:172] (0xc000702420) (0xc000494be0) Stream removed, broadcasting: 3\nI0501 12:44:43.211251 2875 log.go:172] (0xc000702420) (0xc00072a6e0) Stream removed, broadcasting: 5\n" May 1 12:44:43.215: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 12:44:43.215: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 12:44:43.218: INFO: Found 1 stateful pods, waiting for 3 May 1 12:44:53.223: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 1 12:44:53.223: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 1 12:44:53.223: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 1 12:44:53.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dg7pm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 12:44:53.490: INFO: stderr: "I0501 12:44:53.362937 2898 log.go:172] (0xc00013a840) (0xc0007a6640) Create stream\nI0501 12:44:53.362996 2898 log.go:172] (0xc00013a840) (0xc0007a6640) Stream added, broadcasting: 1\nI0501 
12:44:53.366244 2898 log.go:172] (0xc00013a840) Reply frame received for 1\nI0501 12:44:53.366305 2898 log.go:172] (0xc00013a840) (0xc000688b40) Create stream\nI0501 12:44:53.366325 2898 log.go:172] (0xc00013a840) (0xc000688b40) Stream added, broadcasting: 3\nI0501 12:44:53.367446 2898 log.go:172] (0xc00013a840) Reply frame received for 3\nI0501 12:44:53.367485 2898 log.go:172] (0xc00013a840) (0xc00069e000) Create stream\nI0501 12:44:53.367495 2898 log.go:172] (0xc00013a840) (0xc00069e000) Stream added, broadcasting: 5\nI0501 12:44:53.368409 2898 log.go:172] (0xc00013a840) Reply frame received for 5\nI0501 12:44:53.483397 2898 log.go:172] (0xc00013a840) Data frame received for 3\nI0501 12:44:53.483449 2898 log.go:172] (0xc000688b40) (3) Data frame handling\nI0501 12:44:53.483486 2898 log.go:172] (0xc000688b40) (3) Data frame sent\nI0501 12:44:53.483513 2898 log.go:172] (0xc00013a840) Data frame received for 3\nI0501 12:44:53.483528 2898 log.go:172] (0xc000688b40) (3) Data frame handling\nI0501 12:44:53.483581 2898 log.go:172] (0xc00013a840) Data frame received for 5\nI0501 12:44:53.483634 2898 log.go:172] (0xc00069e000) (5) Data frame handling\nI0501 12:44:53.485108 2898 log.go:172] (0xc00013a840) Data frame received for 1\nI0501 12:44:53.485367 2898 log.go:172] (0xc0007a6640) (1) Data frame handling\nI0501 12:44:53.485399 2898 log.go:172] (0xc0007a6640) (1) Data frame sent\nI0501 12:44:53.485489 2898 log.go:172] (0xc00013a840) (0xc0007a6640) Stream removed, broadcasting: 1\nI0501 12:44:53.485577 2898 log.go:172] (0xc00013a840) Go away received\nI0501 12:44:53.485695 2898 log.go:172] (0xc00013a840) (0xc0007a6640) Stream removed, broadcasting: 1\nI0501 12:44:53.485721 2898 log.go:172] (0xc00013a840) (0xc000688b40) Stream removed, broadcasting: 3\nI0501 12:44:53.485734 2898 log.go:172] (0xc00013a840) (0xc00069e000) Stream removed, broadcasting: 5\n" May 1 12:44:53.490: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 12:44:53.490: INFO: 
stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 12:44:53.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dg7pm ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 12:44:53.765: INFO: stderr: "I0501 12:44:53.657998 2921 log.go:172] (0xc0006b4210) (0xc00072a5a0) Create stream\nI0501 12:44:53.658051 2921 log.go:172] (0xc0006b4210) (0xc00072a5a0) Stream added, broadcasting: 1\nI0501 12:44:53.660513 2921 log.go:172] (0xc0006b4210) Reply frame received for 1\nI0501 12:44:53.660557 2921 log.go:172] (0xc0006b4210) (0xc00072a640) Create stream\nI0501 12:44:53.660571 2921 log.go:172] (0xc0006b4210) (0xc00072a640) Stream added, broadcasting: 3\nI0501 12:44:53.662241 2921 log.go:172] (0xc0006b4210) Reply frame received for 3\nI0501 12:44:53.662266 2921 log.go:172] (0xc0006b4210) (0xc000124d20) Create stream\nI0501 12:44:53.662275 2921 log.go:172] (0xc0006b4210) (0xc000124d20) Stream added, broadcasting: 5\nI0501 12:44:53.663529 2921 log.go:172] (0xc0006b4210) Reply frame received for 5\nI0501 12:44:53.758277 2921 log.go:172] (0xc0006b4210) Data frame received for 3\nI0501 12:44:53.758322 2921 log.go:172] (0xc00072a640) (3) Data frame handling\nI0501 12:44:53.758358 2921 log.go:172] (0xc00072a640) (3) Data frame sent\nI0501 12:44:53.758376 2921 log.go:172] (0xc0006b4210) Data frame received for 3\nI0501 12:44:53.758390 2921 log.go:172] (0xc00072a640) (3) Data frame handling\nI0501 12:44:53.758597 2921 log.go:172] (0xc0006b4210) Data frame received for 5\nI0501 12:44:53.758628 2921 log.go:172] (0xc000124d20) (5) Data frame handling\nI0501 12:44:53.760686 2921 log.go:172] (0xc0006b4210) Data frame received for 1\nI0501 12:44:53.760723 2921 log.go:172] (0xc00072a5a0) (1) Data frame handling\nI0501 12:44:53.760756 2921 log.go:172] (0xc00072a5a0) (1) Data frame sent\nI0501 12:44:53.760795 2921 log.go:172] 
(0xc0006b4210) (0xc00072a5a0) Stream removed, broadcasting: 1\nI0501 12:44:53.760816 2921 log.go:172] (0xc0006b4210) Go away received\nI0501 12:44:53.761090 2921 log.go:172] (0xc0006b4210) (0xc00072a5a0) Stream removed, broadcasting: 1\nI0501 12:44:53.761327 2921 log.go:172] (0xc0006b4210) (0xc00072a640) Stream removed, broadcasting: 3\nI0501 12:44:53.761359 2921 log.go:172] (0xc0006b4210) (0xc000124d20) Stream removed, broadcasting: 5\n" May 1 12:44:53.765: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 12:44:53.765: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 12:44:53.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dg7pm ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 1 12:44:54.017: INFO: stderr: "I0501 12:44:53.892165 2944 log.go:172] (0xc000138790) (0xc000736640) Create stream\nI0501 12:44:53.892236 2944 log.go:172] (0xc000138790) (0xc000736640) Stream added, broadcasting: 1\nI0501 12:44:53.894981 2944 log.go:172] (0xc000138790) Reply frame received for 1\nI0501 12:44:53.895030 2944 log.go:172] (0xc000138790) (0xc0006a0c80) Create stream\nI0501 12:44:53.895042 2944 log.go:172] (0xc000138790) (0xc0006a0c80) Stream added, broadcasting: 3\nI0501 12:44:53.895946 2944 log.go:172] (0xc000138790) Reply frame received for 3\nI0501 12:44:53.895989 2944 log.go:172] (0xc000138790) (0xc0007366e0) Create stream\nI0501 12:44:53.896003 2944 log.go:172] (0xc000138790) (0xc0007366e0) Stream added, broadcasting: 5\nI0501 12:44:53.896858 2944 log.go:172] (0xc000138790) Reply frame received for 5\nI0501 12:44:54.010111 2944 log.go:172] (0xc000138790) Data frame received for 3\nI0501 12:44:54.010167 2944 log.go:172] (0xc0006a0c80) (3) Data frame handling\nI0501 12:44:54.010220 2944 log.go:172] (0xc0006a0c80) (3) Data frame sent\nI0501 12:44:54.010677 2944 
log.go:172] (0xc000138790) Data frame received for 5\nI0501 12:44:54.010704 2944 log.go:172] (0xc0007366e0) (5) Data frame handling\nI0501 12:44:54.010786 2944 log.go:172] (0xc000138790) Data frame received for 3\nI0501 12:44:54.010825 2944 log.go:172] (0xc0006a0c80) (3) Data frame handling\nI0501 12:44:54.012618 2944 log.go:172] (0xc000138790) Data frame received for 1\nI0501 12:44:54.012645 2944 log.go:172] (0xc000736640) (1) Data frame handling\nI0501 12:44:54.012657 2944 log.go:172] (0xc000736640) (1) Data frame sent\nI0501 12:44:54.012671 2944 log.go:172] (0xc000138790) (0xc000736640) Stream removed, broadcasting: 1\nI0501 12:44:54.012688 2944 log.go:172] (0xc000138790) Go away received\nI0501 12:44:54.012917 2944 log.go:172] (0xc000138790) (0xc000736640) Stream removed, broadcasting: 1\nI0501 12:44:54.012942 2944 log.go:172] (0xc000138790) (0xc0006a0c80) Stream removed, broadcasting: 3\nI0501 12:44:54.012956 2944 log.go:172] (0xc000138790) (0xc0007366e0) Stream removed, broadcasting: 5\n" May 1 12:44:54.017: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 1 12:44:54.017: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 1 12:44:54.017: INFO: Waiting for statefulset status.replicas updated to 0 May 1 12:44:54.021: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 1 12:45:04.041: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 12:45:04.041: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 1 12:45:04.041: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 1 12:45:04.055: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999395s May 1 12:45:05.074: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993527862s May 1 12:45:06.079: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 7.97467492s May 1 12:45:07.084: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969161603s May 1 12:45:08.090: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.964297421s May 1 12:45:09.095: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.958645038s May 1 12:45:10.100: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.953305342s May 1 12:45:11.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.948136588s May 1 12:45:12.113: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.94083095s May 1 12:45:13.119: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.078091ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-dg7pm May 1 12:45:14.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dg7pm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 12:45:14.377: INFO: stderr: "I0501 12:45:14.269588 2966 log.go:172] (0xc00014c840) (0xc00076e640) Create stream\nI0501 12:45:14.269657 2966 log.go:172] (0xc00014c840) (0xc00076e640) Stream added, broadcasting: 1\nI0501 12:45:14.272336 2966 log.go:172] (0xc00014c840) Reply frame received for 1\nI0501 12:45:14.272382 2966 log.go:172] (0xc00014c840) (0xc00067cbe0) Create stream\nI0501 12:45:14.272401 2966 log.go:172] (0xc00014c840) (0xc00067cbe0) Stream added, broadcasting: 3\nI0501 12:45:14.273661 2966 log.go:172] (0xc00014c840) Reply frame received for 3\nI0501 12:45:14.273731 2966 log.go:172] (0xc00014c840) (0xc00067cd20) Create stream\nI0501 12:45:14.273758 2966 log.go:172] (0xc00014c840) (0xc00067cd20) Stream added, broadcasting: 5\nI0501 12:45:14.274821 2966 log.go:172] (0xc00014c840) Reply frame received for 5\nI0501 12:45:14.370805 2966 log.go:172] (0xc00014c840) Data frame received for 5\nI0501 
12:45:14.370848 2966 log.go:172] (0xc00014c840) Data frame received for 3\nI0501 12:45:14.370891 2966 log.go:172] (0xc00067cbe0) (3) Data frame handling\nI0501 12:45:14.370913 2966 log.go:172] (0xc00067cbe0) (3) Data frame sent\nI0501 12:45:14.370930 2966 log.go:172] (0xc00014c840) Data frame received for 3\nI0501 12:45:14.370951 2966 log.go:172] (0xc00067cbe0) (3) Data frame handling\nI0501 12:45:14.370973 2966 log.go:172] (0xc00067cd20) (5) Data frame handling\nI0501 12:45:14.372323 2966 log.go:172] (0xc00014c840) Data frame received for 1\nI0501 12:45:14.372350 2966 log.go:172] (0xc00076e640) (1) Data frame handling\nI0501 12:45:14.372367 2966 log.go:172] (0xc00076e640) (1) Data frame sent\nI0501 12:45:14.372389 2966 log.go:172] (0xc00014c840) (0xc00076e640) Stream removed, broadcasting: 1\nI0501 12:45:14.372420 2966 log.go:172] (0xc00014c840) Go away received\nI0501 12:45:14.372632 2966 log.go:172] (0xc00014c840) (0xc00076e640) Stream removed, broadcasting: 1\nI0501 12:45:14.372661 2966 log.go:172] (0xc00014c840) (0xc00067cbe0) Stream removed, broadcasting: 3\nI0501 12:45:14.372682 2966 log.go:172] (0xc00014c840) (0xc00067cd20) Stream removed, broadcasting: 5\n" May 1 12:45:14.377: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 12:45:14.377: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 12:45:14.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dg7pm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 12:45:14.577: INFO: stderr: "I0501 12:45:14.505481 2988 log.go:172] (0xc000138840) (0xc000756640) Create stream\nI0501 12:45:14.505532 2988 log.go:172] (0xc000138840) (0xc000756640) Stream added, broadcasting: 1\nI0501 12:45:14.507542 2988 log.go:172] (0xc000138840) Reply frame received for 1\nI0501 12:45:14.507590 2988 log.go:172] (0xc000138840) 
(0xc0005eebe0) Create stream\nI0501 12:45:14.507607 2988 log.go:172] (0xc000138840) (0xc0005eebe0) Stream added, broadcasting: 3\nI0501 12:45:14.508363 2988 log.go:172] (0xc000138840) Reply frame received for 3\nI0501 12:45:14.508391 2988 log.go:172] (0xc000138840) (0xc0005be000) Create stream\nI0501 12:45:14.508398 2988 log.go:172] (0xc000138840) (0xc0005be000) Stream added, broadcasting: 5\nI0501 12:45:14.509221 2988 log.go:172] (0xc000138840) Reply frame received for 5\nI0501 12:45:14.572435 2988 log.go:172] (0xc000138840) Data frame received for 3\nI0501 12:45:14.572495 2988 log.go:172] (0xc0005eebe0) (3) Data frame handling\nI0501 12:45:14.572515 2988 log.go:172] (0xc0005eebe0) (3) Data frame sent\nI0501 12:45:14.572533 2988 log.go:172] (0xc000138840) Data frame received for 3\nI0501 12:45:14.572544 2988 log.go:172] (0xc0005eebe0) (3) Data frame handling\nI0501 12:45:14.572590 2988 log.go:172] (0xc000138840) Data frame received for 5\nI0501 12:45:14.572621 2988 log.go:172] (0xc0005be000) (5) Data frame handling\nI0501 12:45:14.574021 2988 log.go:172] (0xc000138840) Data frame received for 1\nI0501 12:45:14.574039 2988 log.go:172] (0xc000756640) (1) Data frame handling\nI0501 12:45:14.574075 2988 log.go:172] (0xc000756640) (1) Data frame sent\nI0501 12:45:14.574102 2988 log.go:172] (0xc000138840) (0xc000756640) Stream removed, broadcasting: 1\nI0501 12:45:14.574120 2988 log.go:172] (0xc000138840) Go away received\nI0501 12:45:14.574327 2988 log.go:172] (0xc000138840) (0xc000756640) Stream removed, broadcasting: 1\nI0501 12:45:14.574348 2988 log.go:172] (0xc000138840) (0xc0005eebe0) Stream removed, broadcasting: 3\nI0501 12:45:14.574364 2988 log.go:172] (0xc000138840) (0xc0005be000) Stream removed, broadcasting: 5\n" May 1 12:45:14.577: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 12:45:14.577: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' 
May 1 12:45:14.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dg7pm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 1 12:45:14.783: INFO: stderr: "I0501 12:45:14.703346 3011 log.go:172] (0xc0006d0420) (0xc00069d4a0) Create stream\nI0501 12:45:14.703411 3011 log.go:172] (0xc0006d0420) (0xc00069d4a0) Stream added, broadcasting: 1\nI0501 12:45:14.706024 3011 log.go:172] (0xc0006d0420) Reply frame received for 1\nI0501 12:45:14.706086 3011 log.go:172] (0xc0006d0420) (0xc00035e000) Create stream\nI0501 12:45:14.706160 3011 log.go:172] (0xc0006d0420) (0xc00035e000) Stream added, broadcasting: 3\nI0501 12:45:14.707151 3011 log.go:172] (0xc0006d0420) Reply frame received for 3\nI0501 12:45:14.707197 3011 log.go:172] (0xc0006d0420) (0xc00069d540) Create stream\nI0501 12:45:14.707211 3011 log.go:172] (0xc0006d0420) (0xc00069d540) Stream added, broadcasting: 5\nI0501 12:45:14.708109 3011 log.go:172] (0xc0006d0420) Reply frame received for 5\nI0501 12:45:14.774203 3011 log.go:172] (0xc0006d0420) Data frame received for 5\nI0501 12:45:14.774327 3011 log.go:172] (0xc00069d540) (5) Data frame handling\nI0501 12:45:14.774356 3011 log.go:172] (0xc0006d0420) Data frame received for 3\nI0501 12:45:14.774364 3011 log.go:172] (0xc00035e000) (3) Data frame handling\nI0501 12:45:14.774373 3011 log.go:172] (0xc00035e000) (3) Data frame sent\nI0501 12:45:14.774379 3011 log.go:172] (0xc0006d0420) Data frame received for 3\nI0501 12:45:14.774393 3011 log.go:172] (0xc00035e000) (3) Data frame handling\nI0501 12:45:14.775612 3011 log.go:172] (0xc0006d0420) Data frame received for 1\nI0501 12:45:14.775649 3011 log.go:172] (0xc00069d4a0) (1) Data frame handling\nI0501 12:45:14.775663 3011 log.go:172] (0xc00069d4a0) (1) Data frame sent\nI0501 12:45:14.775682 3011 log.go:172] (0xc0006d0420) (0xc00069d4a0) Stream removed, broadcasting: 1\nI0501 12:45:14.775699 3011 log.go:172] (0xc0006d0420) Go away 
received\nI0501 12:45:14.775903 3011 log.go:172] (0xc0006d0420) (0xc00069d4a0) Stream removed, broadcasting: 1\nI0501 12:45:14.775918 3011 log.go:172] (0xc0006d0420) (0xc00035e000) Stream removed, broadcasting: 3\nI0501 12:45:14.775925 3011 log.go:172] (0xc0006d0420) (0xc00069d540) Stream removed, broadcasting: 5\n" May 1 12:45:14.783: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 1 12:45:14.783: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 1 12:45:14.783: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 1 12:45:44.799: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dg7pm May 1 12:45:44.802: INFO: Scaling statefulset ss to 0 May 1 12:45:44.811: INFO: Waiting for statefulset status.replicas updated to 0 May 1 12:45:44.813: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:45:44.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-dg7pm" for this suite. 
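[Annotation] The "Scale down will halt with unhealthy stateful pod" phase above works by moving nginx's index.html aside via `kubectl exec`, so the pod's HTTP readiness probe starts failing, and later moving it back to restore readiness. A minimal local sketch of the `mv … || true` idiom the test runs inside each pod (directories here are local stand-ins, not the pod's real filesystem):

```shell
# Local stand-in for the readiness-toggle idiom the test runs via kubectl exec:
#   mv -v /usr/share/nginx/html/index.html /tmp/ || true
# `|| true` makes the command idempotent: re-running it after the file has
# already been moved still exits 0, so the exec step never fails the test.
html=$(mktemp -d)   # stand-in for /usr/share/nginx/html
tmp=$(mktemp -d)    # stand-in for /tmp
echo ok > "$html/index.html"

mv -v "$html/index.html" "$tmp/" || true              # break readiness
mv -v "$html/index.html" "$tmp/" 2>/dev/null || true  # no-op, but still exit 0
echo "second mv exit: $?"                             # prints: second mv exit: 0

mv -v "$tmp/index.html" "$html/" || true              # restore readiness
test -f "$html/index.html" && echo restored
```

While the file is moved aside, the StatefulSet controller sees the pod as not Ready and refuses to proceed with scaling, which is exactly the "doesn't scale past N" polling visible in the log.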
May 1 12:45:50.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:45:50.939: INFO: namespace: e2e-tests-statefulset-dg7pm, resource: bindings, ignored listing per whitelist May 1 12:45:50.968: INFO: namespace e2e-tests-statefulset-dg7pm deletion completed in 6.127823878s • [SLOW TEST:98.767 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:45:50.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 1 12:45:51.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b042aef9-8ba9-11ea-88a3-0242ac110017" in namespace 
"e2e-tests-projected-w95gg" to be "success or failure" May 1 12:45:51.136: INFO: Pod "downwardapi-volume-b042aef9-8ba9-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 18.284225ms May 1 12:45:53.290: INFO: Pod "downwardapi-volume-b042aef9-8ba9-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171709081s May 1 12:45:55.295: INFO: Pod "downwardapi-volume-b042aef9-8ba9-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.176669526s STEP: Saw pod success May 1 12:45:55.295: INFO: Pod "downwardapi-volume-b042aef9-8ba9-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:45:55.298: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b042aef9-8ba9-11ea-88a3-0242ac110017 container client-container: STEP: delete the pod May 1 12:45:55.327: INFO: Waiting for pod downwardapi-volume-b042aef9-8ba9-11ea-88a3-0242ac110017 to disappear May 1 12:45:55.356: INFO: Pod downwardapi-volume-b042aef9-8ba9-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:45:55.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-w95gg" for this suite. 
May 1 12:46:01.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:46:01.469: INFO: namespace: e2e-tests-projected-w95gg, resource: bindings, ignored listing per whitelist May 1 12:46:01.509: INFO: namespace e2e-tests-projected-w95gg deletion completed in 6.149711095s • [SLOW TEST:10.541 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:46:01.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:46:01.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-k5fx4" for this suite. 
May 1 12:46:07.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:46:07.790: INFO: namespace: e2e-tests-services-k5fx4, resource: bindings, ignored listing per whitelist May 1 12:46:07.852: INFO: namespace e2e-tests-services-k5fx4 deletion completed in 6.090401795s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.342 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:46:07.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 1 12:46:07.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hv5tz' May 1 12:46:08.241: INFO: stderr: "" May 1 12:46:08.241: INFO: stdout: "pod/pause created\n" May 1 12:46:08.241: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 1 
12:46:08.241: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-hv5tz" to be "running and ready" May 1 12:46:08.284: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 42.66174ms May 1 12:46:10.288: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047101647s May 1 12:46:12.292: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.051018873s May 1 12:46:12.292: INFO: Pod "pause" satisfied condition "running and ready" May 1 12:46:12.292: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 1 12:46:12.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-hv5tz' May 1 12:46:12.410: INFO: stderr: "" May 1 12:46:12.410: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 1 12:46:12.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-hv5tz' May 1 12:46:12.499: INFO: stderr: "" May 1 12:46:12.499: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 1 12:46:12.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-hv5tz' May 1 12:46:12.622: INFO: stderr: "" May 1 12:46:12.622: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 1 12:46:12.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L 
testing-label --namespace=e2e-tests-kubectl-hv5tz' May 1 12:46:12.724: INFO: stderr: "" May 1 12:46:12.724: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 1 12:46:12.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hv5tz' May 1 12:46:12.868: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 12:46:12.868: INFO: stdout: "pod \"pause\" force deleted\n" May 1 12:46:12.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-hv5tz' May 1 12:46:12.974: INFO: stderr: "No resources found.\n" May 1 12:46:12.974: INFO: stdout: "" May 1 12:46:12.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-hv5tz -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 1 12:46:13.084: INFO: stderr: "" May 1 12:46:13.084: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:46:13.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hv5tz" for this suite. 
May 1 12:46:19.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:46:19.216: INFO: namespace: e2e-tests-kubectl-hv5tz, resource: bindings, ignored listing per whitelist May 1 12:46:19.259: INFO: namespace e2e-tests-kubectl-hv5tz deletion completed in 6.170576391s • [SLOW TEST:11.407 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:46:19.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-km9k6 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a 
new StatefulSet May 1 12:46:19.862: INFO: Found 0 stateful pods, waiting for 3 May 1 12:46:29.867: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 12:46:29.867: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 12:46:29.867: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 1 12:46:39.867: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 12:46:39.867: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 12:46:39.867: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 1 12:46:39.893: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 1 12:46:49.934: INFO: Updating stateful set ss2 May 1 12:46:49.955: INFO: Waiting for Pod e2e-tests-statefulset-km9k6/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 12:46:59.963: INFO: Waiting for Pod e2e-tests-statefulset-km9k6/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 1 12:47:10.585: INFO: Found 2 stateful pods, waiting for 3 May 1 12:47:20.591: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 12:47:20.591: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 12:47:20.591: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 1 12:47:20.616: INFO: Updating stateful set ss2 May 1 12:47:20.659: INFO: Waiting for Pod 
e2e-tests-statefulset-km9k6/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 12:47:30.668: INFO: Waiting for Pod e2e-tests-statefulset-km9k6/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 12:47:40.686: INFO: Updating stateful set ss2 May 1 12:47:40.699: INFO: Waiting for StatefulSet e2e-tests-statefulset-km9k6/ss2 to complete update May 1 12:47:40.699: INFO: Waiting for Pod e2e-tests-statefulset-km9k6/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 1 12:47:50.708: INFO: Waiting for StatefulSet e2e-tests-statefulset-km9k6/ss2 to complete update May 1 12:47:50.708: INFO: Waiting for Pod e2e-tests-statefulset-km9k6/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 1 12:48:00.708: INFO: Deleting all statefulset in ns e2e-tests-statefulset-km9k6 May 1 12:48:00.711: INFO: Scaling statefulset ss2 to 0 May 1 12:48:40.747: INFO: Waiting for statefulset status.replicas updated to 0 May 1 12:48:40.751: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:48:40.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-km9k6" for this suite. 
May 1 12:48:48.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:48:48.843: INFO: namespace: e2e-tests-statefulset-km9k6, resource: bindings, ignored listing per whitelist May 1 12:48:48.871: INFO: namespace e2e-tests-statefulset-km9k6 deletion completed in 8.076926551s • [SLOW TEST:149.612 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:48:48.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-1a51ba3f-8baa-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 12:48:49.057: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a5243dd-8baa-11ea-88a3-0242ac110017" in namespace "e2e-tests-projected-dd4jc" to be "success or failure" May 1 
12:48:49.075: INFO: Pod "pod-projected-configmaps-1a5243dd-8baa-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 18.046266ms May 1 12:48:51.079: INFO: Pod "pod-projected-configmaps-1a5243dd-8baa-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021603441s May 1 12:48:53.083: INFO: Pod "pod-projected-configmaps-1a5243dd-8baa-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026263567s May 1 12:48:55.087: INFO: Pod "pod-projected-configmaps-1a5243dd-8baa-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029718721s STEP: Saw pod success May 1 12:48:55.087: INFO: Pod "pod-projected-configmaps-1a5243dd-8baa-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:48:55.089: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-1a5243dd-8baa-11ea-88a3-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 1 12:48:55.128: INFO: Waiting for pod pod-projected-configmaps-1a5243dd-8baa-11ea-88a3-0242ac110017 to disappear May 1 12:48:55.138: INFO: Pod pod-projected-configmaps-1a5243dd-8baa-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:48:55.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dd4jc" for this suite. 
May 1 12:49:01.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:49:01.273: INFO: namespace: e2e-tests-projected-dd4jc, resource: bindings, ignored listing per whitelist May 1 12:49:01.331: INFO: namespace e2e-tests-projected-dd4jc deletion completed in 6.169593995s • [SLOW TEST:12.460 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:49:01.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 1 12:49:08.900: INFO: 9 pods remaining May 1 12:49:08.900: INFO: 8 pods has nil DeletionTimestamp May 1 12:49:08.900: INFO: May 1 12:49:10.230: INFO: 8 pods remaining May 1 12:49:10.230: INFO: 0 pods has nil DeletionTimestamp May 1 12:49:10.230: INFO: May 1 12:49:11.514: INFO: 0 pods remaining May 1 12:49:11.514: INFO: 0 pods has nil DeletionTimestamp May 1 12:49:11.514: INFO: STEP: Gathering 
metrics W0501 12:49:12.148236 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 12:49:12.148: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:49:12.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-ckksz" for this suite. 
May 1 12:49:20.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:49:20.751: INFO: namespace: e2e-tests-gc-ckksz, resource: bindings, ignored listing per whitelist May 1 12:49:20.754: INFO: namespace e2e-tests-gc-ckksz deletion completed in 8.603106735s • [SLOW TEST:19.423 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:49:20.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 1 12:49:20.879: INFO: Waiting up to 5m0s for pod "downward-api-2d47ff3d-8baa-11ea-88a3-0242ac110017" in namespace "e2e-tests-downward-api-cqhxw" to be "success or failure" May 1 12:49:20.883: INFO: Pod "downward-api-2d47ff3d-8baa-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.821863ms May 1 12:49:23.111: INFO: Pod "downward-api-2d47ff3d-8baa-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231755196s May 1 12:49:25.185: INFO: Pod "downward-api-2d47ff3d-8baa-11ea-88a3-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.305930003s May 1 12:49:27.264: INFO: Pod "downward-api-2d47ff3d-8baa-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.385250212s STEP: Saw pod success May 1 12:49:27.264: INFO: Pod "downward-api-2d47ff3d-8baa-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:49:27.424: INFO: Trying to get logs from node hunter-worker2 pod downward-api-2d47ff3d-8baa-11ea-88a3-0242ac110017 container dapi-container: STEP: delete the pod May 1 12:49:27.593: INFO: Waiting for pod downward-api-2d47ff3d-8baa-11ea-88a3-0242ac110017 to disappear May 1 12:49:27.638: INFO: Pod downward-api-2d47ff3d-8baa-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:49:27.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cqhxw" for this suite. 
May 1 12:49:34.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:49:34.060: INFO: namespace: e2e-tests-downward-api-cqhxw, resource: bindings, ignored listing per whitelist May 1 12:49:34.124: INFO: namespace e2e-tests-downward-api-cqhxw deletion completed in 6.483394924s • [SLOW TEST:13.369 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:49:34.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-35460eaa-8baa-11ea-88a3-0242ac110017 STEP: Creating a pod to test consume configMaps May 1 12:49:34.315: INFO: Waiting up to 5m0s for pod "pod-configmaps-3546ac55-8baa-11ea-88a3-0242ac110017" in namespace "e2e-tests-configmap-dkvpl" to be "success or failure" May 1 12:49:34.331: INFO: Pod "pod-configmaps-3546ac55-8baa-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.704984ms May 1 12:49:36.383: INFO: Pod "pod-configmaps-3546ac55-8baa-11ea-88a3-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067394248s May 1 12:49:38.387: INFO: Pod "pod-configmaps-3546ac55-8baa-11ea-88a3-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071371012s STEP: Saw pod success May 1 12:49:38.387: INFO: Pod "pod-configmaps-3546ac55-8baa-11ea-88a3-0242ac110017" satisfied condition "success or failure" May 1 12:49:38.389: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-3546ac55-8baa-11ea-88a3-0242ac110017 container configmap-volume-test: STEP: delete the pod May 1 12:49:38.428: INFO: Waiting for pod pod-configmaps-3546ac55-8baa-11ea-88a3-0242ac110017 to disappear May 1 12:49:38.444: INFO: Pod pod-configmaps-3546ac55-8baa-11ea-88a3-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:49:38.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dkvpl" for this suite. 
May 1 12:49:44.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:49:44.569: INFO: namespace: e2e-tests-configmap-dkvpl, resource: bindings, ignored listing per whitelist May 1 12:49:44.582: INFO: namespace e2e-tests-configmap-dkvpl deletion completed in 6.134439623s • [SLOW TEST:10.459 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 1 12:49:44.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 1 12:49:48.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-9778f" for this suite. 
May 1 12:50:34.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 1 12:50:34.913: INFO: namespace: e2e-tests-kubelet-test-9778f, resource: bindings, ignored listing per whitelist May 1 12:50:34.936: INFO: namespace e2e-tests-kubelet-test-9778f deletion completed in 46.098760485s • [SLOW TEST:50.354 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSMay 1 12:50:34.937: INFO: Running AfterSuite actions on all nodes May 1 12:50:34.937: INFO: Running AfterSuite actions on node 1 May 1 12:50:34.937: INFO: Skipping dumping logs from cluster Summarizing 1 Failure: [Fail] [sig-storage] Projected downwardAPI [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2395 Ran 200 of 2164 Specs in 7420.952 seconds FAIL! -- 199 Passed | 1 Failed | 0 Pending | 1964 Skipped --- FAIL: TestE2E (7421.15s) FAIL