I0723 10:46:53.547904 6 e2e.go:224] Starting e2e run "d1f26527-ccd1-11ea-92a5-0242ac11000b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1595501213 - Will randomize all specs
Will run 201 of 2164 specs

Jul 23 10:46:53.707: INFO: >>> kubeConfig: /root/.kube/config
Jul 23 10:46:53.710: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 23 10:46:53.729: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 23 10:46:53.761: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 23 10:46:53.761: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 23 10:46:53.761: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 23 10:46:53.768: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 23 10:46:53.768: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 23 10:46:53.768: INFO: e2e test version: v1.13.12
Jul 23 10:46:53.769: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:46:53.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Jul 23 10:46:53.935: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-d2758b8d-ccd1-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 23 10:46:53.952: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2761e7a-ccd1-11ea-92a5-0242ac11000b" in namespace "e2e-tests-configmap-p6q7k" to be "success or failure"
Jul 23 10:46:53.957: INFO: Pod "pod-configmaps-d2761e7a-ccd1-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.749417ms
Jul 23 10:46:55.961: INFO: Pod "pod-configmaps-d2761e7a-ccd1-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008774283s
Jul 23 10:46:57.964: INFO: Pod "pod-configmaps-d2761e7a-ccd1-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011770588s
STEP: Saw pod success
Jul 23 10:46:57.964: INFO: Pod "pod-configmaps-d2761e7a-ccd1-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:46:57.966: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-d2761e7a-ccd1-11ea-92a5-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Jul 23 10:46:58.115: INFO: Waiting for pod pod-configmaps-d2761e7a-ccd1-11ea-92a5-0242ac11000b to disappear
Jul 23 10:46:58.265: INFO: Pod pod-configmaps-d2761e7a-ccd1-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:46:58.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-p6q7k" for this suite.
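The repeated `Phase="Pending" … Elapsed` lines above come from the framework polling the pod every couple of seconds until it reaches a terminal phase or the 5m0s deadline expires. A minimal Python sketch of that polling loop, assuming a caller-supplied `get_phase()` callback; `wait_for_pod_phase` is a hypothetical helper, not the framework's actual API:

```python
def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"), max_polls=150):
    """Poll get_phase() until a terminal phase appears, recording each observation.

    max_polls=150 at a ~2s interval roughly mirrors the 5m0s budget in the log.
    """
    history = []
    for _ in range(max_polls):
        phase = get_phase()        # one API poll; the real framework sleeps ~2s between polls
        history.append(phase)
        if phase in want:
            return phase, history
    raise TimeoutError("pod never reached %s after %d polls" % (want, max_polls))

# Simulated pod that is Pending twice, then Succeeded -- the pattern in the log above.
phases = iter(["Pending", "Pending", "Succeeded"])
final, seen = wait_for_pod_phase(lambda: next(phases))
```

The real framework also logs the elapsed wall-clock time per poll; the sketch keeps only the observed phases to stay deterministic.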
Jul 23 10:47:04.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:47:04.394: INFO: namespace: e2e-tests-configmap-p6q7k, resource: bindings, ignored listing per whitelist
Jul 23 10:47:04.468: INFO: namespace e2e-tests-configmap-p6q7k deletion completed in 6.197929165s
• [SLOW TEST:10.699 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:47:04.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-lbkbn
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jul 23 10:47:04.683: INFO: Found 0 stateful pods, waiting for 3
Jul 23 10:47:14.871: INFO: Found 2 stateful pods, waiting for 3
Jul 23 10:47:24.702: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 10:47:24.702: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 10:47:24.702: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul 23 10:47:24.729: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul 23 10:47:34.932: INFO: Updating stateful set ss2
Jul 23 10:47:34.957: INFO: Waiting for Pod e2e-tests-statefulset-lbkbn/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jul 23 10:47:45.452: INFO: Found 2 stateful pods, waiting for 3
Jul 23 10:47:55.783: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 10:47:55.783: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 10:47:55.783: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul 23 10:47:56.043: INFO: Updating stateful set ss2
Jul 23 10:47:56.325: INFO: Waiting for Pod e2e-tests-statefulset-lbkbn/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 23 10:48:06.411: INFO: Updating stateful set ss2
Jul 23 10:48:06.420: INFO: Waiting for StatefulSet e2e-tests-statefulset-lbkbn/ss2 to complete update
Jul 23 10:48:06.420: INFO: Waiting for Pod e2e-tests-statefulset-lbkbn/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 23 10:48:16.429: INFO: Waiting for StatefulSet e2e-tests-statefulset-lbkbn/ss2 to complete update
Jul 23 10:48:16.429: INFO: Waiting for Pod e2e-tests-statefulset-lbkbn/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 23 10:48:26.428: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lbkbn
Jul 23 10:48:26.431: INFO: Scaling statefulset ss2 to 0
Jul 23 10:48:56.469: INFO: Waiting for statefulset status.replicas updated to 0
Jul 23 10:48:56.471: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:48:56.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-lbkbn" for this suite.
Jul 23 10:49:04.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:49:04.596: INFO: namespace: e2e-tests-statefulset-lbkbn, resource: bindings, ignored listing per whitelist
Jul 23 10:49:04.610: INFO: namespace e2e-tests-statefulset-lbkbn deletion completed in 8.117084233s
• [SLOW TEST:120.141 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:49:04.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 10:49:04.757: INFO: Waiting up to 5m0s for pod "downwardapi-volume-206c3d71-ccd2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-wfcmv" to be "success or failure"
Jul 23 10:49:04.790: INFO: Pod "downwardapi-volume-206c3d71-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.143906ms
Jul 23 10:49:06.794: INFO: Pod "downwardapi-volume-206c3d71-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036991163s
Jul 23 10:49:08.797: INFO: Pod "downwardapi-volume-206c3d71-ccd2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040369529s
STEP: Saw pod success
Jul 23 10:49:08.797: INFO: Pod "downwardapi-volume-206c3d71-ccd2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:49:08.800: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-206c3d71-ccd2-11ea-92a5-0242ac11000b container client-container: 
STEP: delete the pod
Jul 23 10:49:08.826: INFO: Waiting for pod downwardapi-volume-206c3d71-ccd2-11ea-92a5-0242ac11000b to disappear
Jul 23 10:49:08.888: INFO: Pod downwardapi-volume-206c3d71-ccd2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:49:08.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wfcmv" for this suite.
Jul 23 10:49:14.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:49:14.936: INFO: namespace: e2e-tests-projected-wfcmv, resource: bindings, ignored listing per whitelist
Jul 23 10:49:14.984: INFO: namespace e2e-tests-projected-wfcmv deletion completed in 6.091193061s
• [SLOW TEST:10.373 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:49:14.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jul 23 10:49:15.118: INFO: Waiting up to 5m0s for pod "client-containers-269b38ba-ccd2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-containers-ltbq8" to be "success or failure"
Jul 23 10:49:15.147: INFO: Pod "client-containers-269b38ba-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.724956ms
Jul 23 10:49:17.151: INFO: Pod "client-containers-269b38ba-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03255588s
Jul 23 10:49:19.155: INFO: Pod "client-containers-269b38ba-ccd2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037078925s
STEP: Saw pod success
Jul 23 10:49:19.155: INFO: Pod "client-containers-269b38ba-ccd2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:49:19.158: INFO: Trying to get logs from node hunter-worker pod client-containers-269b38ba-ccd2-11ea-92a5-0242ac11000b container test-container: 
STEP: delete the pod
Jul 23 10:49:19.197: INFO: Waiting for pod client-containers-269b38ba-ccd2-11ea-92a5-0242ac11000b to disappear
Jul 23 10:49:19.213: INFO: Pod client-containers-269b38ba-ccd2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:49:19.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-ltbq8" for this suite.
Jul 23 10:49:25.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:49:25.298: INFO: namespace: e2e-tests-containers-ltbq8, resource: bindings, ignored listing per whitelist
Jul 23 10:49:25.312: INFO: namespace e2e-tests-containers-ltbq8 deletion completed in 6.094528806s
• [SLOW TEST:10.327 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:49:25.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:49:25.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-kzhdb" for this suite.
Jul 23 10:49:31.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:49:31.616: INFO: namespace: e2e-tests-kubelet-test-kzhdb, resource: bindings, ignored listing per whitelist
Jul 23 10:49:31.635: INFO: namespace e2e-tests-kubelet-test-kzhdb deletion completed in 6.100093654s
• [SLOW TEST:6.323 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:49:31.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 23 10:49:36.506: INFO: Successfully updated pod "annotationupdate30947725-ccd2-11ea-92a5-0242ac11000b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:49:38.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5lv72" for this suite.
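Each finished spec in this run is summarized with a `• [SLOW TEST:N seconds]` line. When triaging a long run it can help to pull those durations out and total them; a small regex pass is enough. `slow_test_seconds` is a hypothetical helper sketched here, not part of the e2e tooling:

```python
import re

# Matches Ginkgo's slow-test summary, e.g. "• [SLOW TEST:10.699 seconds]".
SLOW = re.compile(r"\[SLOW TEST:([0-9.]+) seconds\]")

def slow_test_seconds(log_text):
    """Return the durations reported by slow-test summaries, in log order."""
    return [float(m) for m in SLOW.findall(log_text)]

# Two summary lines taken from this log.
sample = "• [SLOW TEST:10.699 seconds]\n...\n• [SLOW TEST:120.141 seconds]\n"
durations = slow_test_seconds(sample)
```

Sorting the resulting list descending gives a quick view of which conformance specs dominated the run's wall-clock time.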
Jul 23 10:50:01.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:50:01.207: INFO: namespace: e2e-tests-downward-api-5lv72, resource: bindings, ignored listing per whitelist
Jul 23 10:50:01.255: INFO: namespace e2e-tests-downward-api-5lv72 deletion completed in 22.120750152s
• [SLOW TEST:29.619 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:50:01.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 10:50:01.430: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4232105f-ccd2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-k8s8g" to be "success or failure"
Jul 23 10:50:01.441: INFO: Pod "downwardapi-volume-4232105f-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.305477ms
Jul 23 10:50:03.640: INFO: Pod "downwardapi-volume-4232105f-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210267912s
Jul 23 10:50:05.644: INFO: Pod "downwardapi-volume-4232105f-ccd2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2142068s
STEP: Saw pod success
Jul 23 10:50:05.644: INFO: Pod "downwardapi-volume-4232105f-ccd2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:50:05.647: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4232105f-ccd2-11ea-92a5-0242ac11000b container client-container: 
STEP: delete the pod
Jul 23 10:50:05.729: INFO: Waiting for pod downwardapi-volume-4232105f-ccd2-11ea-92a5-0242ac11000b to disappear
Jul 23 10:50:05.747: INFO: Pod downwardapi-volume-4232105f-ccd2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:50:05.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k8s8g" for this suite.
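The `Elapsed` values in these lines ("11.305477ms", "2.210267912s") and the timeouts ("5m0s", "3m0s") are Go duration strings. A rough Python converter for the units that appear in this log; `go_duration_seconds` is a hypothetical name, and Go's own `time.ParseDuration` is the authoritative parser:

```python
import re

# Seconds per unit, for the units Go durations use.
_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 1e-3, "us": 1e-6, "ns": 1e-9}
# Longer unit names must be tried before their one-letter prefixes ("ms" before "m").
_PART = re.compile(r"([0-9]+(?:\.[0-9]+)?)(h|ms|us|ns|m|s)")

def go_duration_seconds(text):
    """Convert a Go duration string such as '5m0s' or '11.305477ms' to seconds."""
    return sum(float(value) * _UNITS[unit] for value, unit in _PART.findall(text))

timeout = go_duration_seconds("5m0s")          # the per-pod wait budget in this log
elapsed = go_duration_seconds("11.305477ms")   # the first poll's elapsed time
```

This makes it easy to compare, say, per-poll elapsed times against the 5m0s budget without eyeballing mixed units.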
Jul 23 10:50:11.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:50:11.823: INFO: namespace: e2e-tests-projected-k8s8g, resource: bindings, ignored listing per whitelist
Jul 23 10:50:11.847: INFO: namespace e2e-tests-projected-k8s8g deletion completed in 6.096870337s
• [SLOW TEST:10.592 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:50:11.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 23 10:50:16.499: INFO: Successfully updated pod "labelsupdate487895ad-ccd2-11ea-92a5-0242ac11000b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:50:18.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vsfjg" for this suite.
Jul 23 10:50:42.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:50:42.742: INFO: namespace: e2e-tests-downward-api-vsfjg, resource: bindings, ignored listing per whitelist
Jul 23 10:50:42.742: INFO: namespace e2e-tests-downward-api-vsfjg deletion completed in 24.198606003s
• [SLOW TEST:30.895 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:50:42.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jul 23 10:50:47.548: INFO: Pod pod-hostip-5af9e182-ccd2-11ea-92a5-0242ac11000b has hostIP: 172.18.0.4
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:50:47.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-kktk9" for this suite.
Jul 23 10:51:09.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:51:09.877: INFO: namespace: e2e-tests-pods-kktk9, resource: bindings, ignored listing per whitelist
Jul 23 10:51:09.889: INFO: namespace e2e-tests-pods-kktk9 deletion completed in 22.325565957s
• [SLOW TEST:27.147 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:51:09.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
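The timestamped lines in this run share a fixed `Mon DD HH:MM:SS.micros: LEVEL: message` shape, which makes them easy to split apart when grepping a long log. A sketch, assuming only INFO and ERROR levels occur (this excerpt shows only INFO, so ERROR is an assumption); `parse_line` is a hypothetical helper:

```python
import re

# "Jul 23 10:51:09.889: INFO: namespace e2e-tests-pods-kktk9 deletion completed ..."
LINE = re.compile(
    r"^(?P<ts>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d+): "
    r"(?P<level>INFO|ERROR): (?P<msg>.*)$"
)

def parse_line(line):
    """Split one framework log line into timestamp, level, and message, or None."""
    m = LINE.match(line)
    return m.groupdict() if m else None

rec = parse_line("Jul 23 10:51:09.889: INFO: namespace e2e-tests-pods-kktk9 deletion completed in 22.325565957s")
```

STEP:, Ginkgo section headers, and separator lines deliberately fall through as `None`, so a filter over `parse_line` yields just the timestamped events.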
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 10:51:16.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-89zjw" for this suite. Jul 23 10:51:24.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 10:51:24.355: INFO: namespace: e2e-tests-namespaces-89zjw, resource: bindings, ignored listing per whitelist Jul 23 10:51:24.432: INFO: namespace e2e-tests-namespaces-89zjw deletion completed in 8.115083678s STEP: Destroying namespace "e2e-tests-nsdeletetest-skmzq" for this suite. Jul 23 10:51:24.434: INFO: Namespace e2e-tests-nsdeletetest-skmzq was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-tw9mc" for this suite. Jul 23 10:51:30.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 10:51:30.544: INFO: namespace: e2e-tests-nsdeletetest-tw9mc, resource: bindings, ignored listing per whitelist Jul 23 10:51:30.558: INFO: namespace e2e-tests-nsdeletetest-tw9mc deletion completed in 6.124605919s • [SLOW TEST:20.669 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 10:51:30.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 23 10:51:31.144: INFO: Waiting up to 5m0s for pod "pod-77af12d9-ccd2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-hnmrr" to be "success or failure" Jul 23 10:51:31.182: INFO: Pod "pod-77af12d9-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.406586ms Jul 23 10:51:33.232: INFO: Pod "pod-77af12d9-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088655507s Jul 23 10:51:35.236: INFO: Pod "pod-77af12d9-ccd2-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.092715468s Jul 23 10:51:37.241: INFO: Pod "pod-77af12d9-ccd2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.096919576s
STEP: Saw pod success
Jul 23 10:51:37.241: INFO: Pod "pod-77af12d9-ccd2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:51:37.243: INFO: Trying to get logs from node hunter-worker pod pod-77af12d9-ccd2-11ea-92a5-0242ac11000b container test-container:
STEP: delete the pod
Jul 23 10:51:37.266: INFO: Waiting for pod pod-77af12d9-ccd2-11ea-92a5-0242ac11000b to disappear
Jul 23 10:51:37.283: INFO: Pod pod-77af12d9-ccd2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:51:37.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hnmrr" for this suite.
Jul 23 10:51:43.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:51:43.389: INFO: namespace: e2e-tests-emptydir-hnmrr, resource: bindings, ignored listing per whitelist
Jul 23 10:51:43.419: INFO: namespace e2e-tests-emptydir-hnmrr deletion completed in 6.131845135s
• [SLOW TEST:12.860 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:51:43.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7f173def-ccd2-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 23 10:51:43.670: INFO: Waiting up to 5m0s for pod "pod-secrets-7f192c9c-ccd2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-secrets-2f89v" to be "success or failure"
Jul 23 10:51:43.673: INFO: Pod "pod-secrets-7f192c9c-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.210132ms
Jul 23 10:51:45.677: INFO: Pod "pod-secrets-7f192c9c-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007342776s
Jul 23 10:51:47.681: INFO: Pod "pod-secrets-7f192c9c-ccd2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010895538s
STEP: Saw pod success
Jul 23 10:51:47.681: INFO: Pod "pod-secrets-7f192c9c-ccd2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:51:47.683: INFO: Trying to get logs from node hunter-worker pod pod-secrets-7f192c9c-ccd2-11ea-92a5-0242ac11000b container secret-volume-test:
STEP: delete the pod
Jul 23 10:51:47.709: INFO: Waiting for pod pod-secrets-7f192c9c-ccd2-11ea-92a5-0242ac11000b to disappear
Jul 23 10:51:47.801: INFO: Pod pod-secrets-7f192c9c-ccd2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:51:47.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2f89v" for this suite.
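The repeated `Phase="Pending" ... Elapsed: ...` entries above come from the framework polling the pod until it reaches a terminal phase. A minimal shell sketch of that loop, under the assumption of a pluggable phase-reporting command (the real implementation is Go code in `test/e2e/framework`, not this helper):

```shell
#!/bin/sh
# poll_phase CMD TIMEOUT_SECONDS INTERVAL_SECONDS
# Runs CMD repeatedly; CMD is expected to print a pod phase. Stops when a
# terminal phase (Succeeded or Failed) is seen, or when the timeout expires.
poll_phase() {
  cmd=$1; timeout=$2; interval=$3
  elapsed=0
  while [ "$elapsed" -le "$timeout" ]; do
    phase=$($cmd)
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;
    esac
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timeout"
  return 1
}

# Against a live cluster one would plug in something like:
#   poll_phase "kubectl get pod mypod -o jsonpath={.status.phase}" 300 2
poll_phase "echo Succeeded" 5 1
```

Note the loop treats both `Succeeded` and `Failed` as "done", mirroring the log's "success or failure" condition; distinguishing the two happens after the wait.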
Jul 23 10:51:53.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:51:53.916: INFO: namespace: e2e-tests-secrets-2f89v, resource: bindings, ignored listing per whitelist
Jul 23 10:51:53.920: INFO: namespace e2e-tests-secrets-2f89v deletion completed in 6.114346528s
• [SLOW TEST:10.501 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:51:53.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rhxj9
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 23 10:51:54.528: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 23 10:52:27.842: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.172:8080/dial?request=hostName&protocol=http&host=10.244.1.171&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-rhxj9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 23 10:52:27.842: INFO: >>> kubeConfig: /root/.kube/config
I0723 10:52:27.872297 6 log.go:172] (0xc000e1b080) (0xc001d8a140) Create stream
I0723 10:52:27.872325 6 log.go:172] (0xc000e1b080) (0xc001d8a140) Stream added, broadcasting: 1
I0723 10:52:27.875006 6 log.go:172] (0xc000e1b080) Reply frame received for 1
I0723 10:52:27.875040 6 log.go:172] (0xc000e1b080) (0xc001cbe000) Create stream
I0723 10:52:27.875051 6 log.go:172] (0xc000e1b080) (0xc001cbe000) Stream added, broadcasting: 3
I0723 10:52:27.876155 6 log.go:172] (0xc000e1b080) Reply frame received for 3
I0723 10:52:27.876282 6 log.go:172] (0xc000e1b080) (0xc001d8a1e0) Create stream
I0723 10:52:27.876341 6 log.go:172] (0xc000e1b080) (0xc001d8a1e0) Stream added, broadcasting: 5
I0723 10:52:27.878214 6 log.go:172] (0xc000e1b080) Reply frame received for 5
I0723 10:52:28.100154 6 log.go:172] (0xc000e1b080) Data frame received for 3
I0723 10:52:28.100189 6 log.go:172] (0xc001cbe000) (3) Data frame handling
I0723 10:52:28.100210 6 log.go:172] (0xc001cbe000) (3) Data frame sent
I0723 10:52:28.101161 6 log.go:172] (0xc000e1b080) Data frame received for 5
I0723 10:52:28.101232 6 log.go:172] (0xc001d8a1e0) (5) Data frame handling
I0723 10:52:28.101417 6 log.go:172] (0xc000e1b080) Data frame received for 3
I0723 10:52:28.101448 6 log.go:172] (0xc001cbe000) (3) Data frame handling
I0723 10:52:28.103835 6 log.go:172] (0xc000e1b080) Data frame received for 1
I0723 10:52:28.103866 6 log.go:172] (0xc001d8a140) (1) Data frame handling
I0723 10:52:28.103881 6 log.go:172] (0xc001d8a140) (1) Data frame sent
I0723 10:52:28.103895 6 log.go:172] (0xc000e1b080) (0xc001d8a140) Stream removed, broadcasting: 1
I0723 10:52:28.104159 6 log.go:172] (0xc000e1b080) (0xc001d8a140) Stream removed, broadcasting: 1
I0723 10:52:28.104194 6 log.go:172] (0xc000e1b080) Go away received
I0723 10:52:28.104250 6 log.go:172] (0xc000e1b080) (0xc001cbe000) Stream removed, broadcasting: 3
I0723 10:52:28.104307 6 log.go:172] (0xc000e1b080) (0xc001d8a1e0) Stream removed, broadcasting: 5
Jul 23 10:52:28.104: INFO: Waiting for endpoints: map[]
Jul 23 10:52:28.269: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.172:8080/dial?request=hostName&protocol=http&host=10.244.2.226&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-rhxj9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 23 10:52:28.270: INFO: >>> kubeConfig: /root/.kube/config
I0723 10:52:28.296279 6 log.go:172] (0xc00186a4d0) (0xc0016e0500) Create stream
I0723 10:52:28.296326 6 log.go:172] (0xc00186a4d0) (0xc0016e0500) Stream added, broadcasting: 1
I0723 10:52:28.300167 6 log.go:172] (0xc00186a4d0) Reply frame received for 1
I0723 10:52:28.300202 6 log.go:172] (0xc00186a4d0) (0xc001cbe1e0) Create stream
I0723 10:52:28.300214 6 log.go:172] (0xc00186a4d0) (0xc001cbe1e0) Stream added, broadcasting: 3
I0723 10:52:28.301140 6 log.go:172] (0xc00186a4d0) Reply frame received for 3
I0723 10:52:28.301168 6 log.go:172] (0xc00186a4d0) (0xc0016e05a0) Create stream
I0723 10:52:28.301178 6 log.go:172] (0xc00186a4d0) (0xc0016e05a0) Stream added, broadcasting: 5
I0723 10:52:28.301934 6 log.go:172] (0xc00186a4d0) Reply frame received for 5
I0723 10:52:28.355456 6 log.go:172] (0xc00186a4d0) Data frame received for 3
I0723 10:52:28.355498 6 log.go:172] (0xc001cbe1e0) (3) Data frame handling
I0723 10:52:28.355517 6 log.go:172] (0xc001cbe1e0) (3) Data frame sent
I0723 10:52:28.356027 6 log.go:172] (0xc00186a4d0) Data frame received for 5
I0723 10:52:28.356061 6 log.go:172] (0xc0016e05a0) (5) Data frame handling
I0723 10:52:28.356839 6 log.go:172] (0xc00186a4d0) Data frame received for 3
I0723 10:52:28.356863 6 log.go:172] (0xc001cbe1e0) (3) Data frame handling
I0723 10:52:28.358345 6 log.go:172] (0xc00186a4d0) Data frame received for 1
I0723 10:52:28.358383 6 log.go:172] (0xc0016e0500) (1) Data frame handling
I0723 10:52:28.358406 6 log.go:172] (0xc0016e0500) (1) Data frame sent
I0723 10:52:28.358430 6 log.go:172] (0xc00186a4d0) (0xc0016e0500) Stream removed, broadcasting: 1
I0723 10:52:28.358456 6 log.go:172] (0xc00186a4d0) Go away received
I0723 10:52:28.358508 6 log.go:172] (0xc00186a4d0) (0xc0016e0500) Stream removed, broadcasting: 1
I0723 10:52:28.358525 6 log.go:172] (0xc00186a4d0) (0xc001cbe1e0) Stream removed, broadcasting: 3
I0723 10:52:28.358531 6 log.go:172] (0xc00186a4d0) (0xc0016e05a0) Stream removed, broadcasting: 5
Jul 23 10:52:28.358: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:52:28.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rhxj9" for this suite.
Jul 23 10:52:46.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:52:46.438: INFO: namespace: e2e-tests-pod-network-test-rhxj9, resource: bindings, ignored listing per whitelist
Jul 23 10:52:46.492: INFO: namespace e2e-tests-pod-network-test-rhxj9 deletion completed in 18.130243188s
• [SLOW TEST:52.573 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:52:46.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 10:52:46.573: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4a4a191-ccd2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-gz9h2" to be "success or failure"
Jul 23 10:52:46.595: INFO: Pod "downwardapi-volume-a4a4a191-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.463367ms
Jul 23 10:52:48.599: INFO: Pod "downwardapi-volume-a4a4a191-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02570262s
Jul 23 10:52:50.603: INFO: Pod "downwardapi-volume-a4a4a191-ccd2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030090067s
STEP: Saw pod success
Jul 23 10:52:50.603: INFO: Pod "downwardapi-volume-a4a4a191-ccd2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:52:50.606: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a4a4a191-ccd2-11ea-92a5-0242ac11000b container client-container:
STEP: delete the pod
Jul 23 10:52:50.666: INFO: Waiting for pod downwardapi-volume-a4a4a191-ccd2-11ea-92a5-0242ac11000b to disappear
Jul 23 10:52:50.892: INFO: Pod downwardapi-volume-a4a4a191-ccd2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:52:50.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gz9h2" for this suite.
Jul 23 10:52:56.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:52:56.924: INFO: namespace: e2e-tests-downward-api-gz9h2, resource: bindings, ignored listing per whitelist
Jul 23 10:52:56.986: INFO: namespace e2e-tests-downward-api-gz9h2 deletion completed in 6.090328128s
• [SLOW TEST:10.493 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:52:56.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0723 10:52:58.271482 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 23 10:52:58.271: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:52:58.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-b58p9" for this suite.
Jul 23 10:53:04.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:53:04.353: INFO: namespace: e2e-tests-gc-b58p9, resource: bindings, ignored listing per whitelist
Jul 23 10:53:04.365: INFO: namespace e2e-tests-gc-b58p9 deletion completed in 6.090715371s
• [SLOW TEST:7.380 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:53:04.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 10:53:04.478: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af4eac60-ccd2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-6ktqb" to be "success or failure"
Jul 23 10:53:04.482: INFO: Pod "downwardapi-volume-af4eac60-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.308671ms
Jul 23 10:53:06.486: INFO: Pod "downwardapi-volume-af4eac60-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007758698s
Jul 23 10:53:08.490: INFO: Pod "downwardapi-volume-af4eac60-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011924219s
Jul 23 10:53:10.495: INFO: Pod "downwardapi-volume-af4eac60-ccd2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016597265s
STEP: Saw pod success
Jul 23 10:53:10.495: INFO: Pod "downwardapi-volume-af4eac60-ccd2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:53:10.499: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-af4eac60-ccd2-11ea-92a5-0242ac11000b container client-container:
STEP: delete the pod
Jul 23 10:53:10.521: INFO: Waiting for pod downwardapi-volume-af4eac60-ccd2-11ea-92a5-0242ac11000b to disappear
Jul 23 10:53:10.539: INFO: Pod downwardapi-volume-af4eac60-ccd2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:53:10.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6ktqb" for this suite.
Jul 23 10:53:16.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:53:18.168: INFO: namespace: e2e-tests-downward-api-6ktqb, resource: bindings, ignored listing per whitelist
Jul 23 10:53:18.169: INFO: namespace e2e-tests-downward-api-6ktqb deletion completed in 7.626784017s
• [SLOW TEST:13.803 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:53:18.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jul 23 10:53:18.778: INFO: Waiting up to 5m0s for pod "var-expansion-b7aeda63-ccd2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-var-expansion-nnf9w" to be "success or failure"
Jul 23 10:53:18.794: INFO: Pod "var-expansion-b7aeda63-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.750344ms
Jul 23 10:53:20.798: INFO: Pod "var-expansion-b7aeda63-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01976093s
Jul 23 10:53:22.801: INFO: Pod "var-expansion-b7aeda63-ccd2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022682311s
STEP: Saw pod success
Jul 23 10:53:22.801: INFO: Pod "var-expansion-b7aeda63-ccd2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:53:22.848: INFO: Trying to get logs from node hunter-worker pod var-expansion-b7aeda63-ccd2-11ea-92a5-0242ac11000b container dapi-container:
STEP: delete the pod
Jul 23 10:53:23.351: INFO: Waiting for pod var-expansion-b7aeda63-ccd2-11ea-92a5-0242ac11000b to disappear
Jul 23 10:53:23.411: INFO: Pod var-expansion-b7aeda63-ccd2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:53:23.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-nnf9w" for this suite.
Jul 23 10:53:29.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:53:29.531: INFO: namespace: e2e-tests-var-expansion-nnf9w, resource: bindings, ignored listing per whitelist
Jul 23 10:53:29.688: INFO: namespace e2e-tests-var-expansion-nnf9w deletion completed in 6.27247067s
• [SLOW TEST:11.519 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:53:29.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-be742f60-ccd2-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 23 10:53:29.941: INFO: Waiting up to 5m0s for pod "pod-secrets-be783ca7-ccd2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-secrets-9xkqj" to be "success or failure"
Jul 23 10:53:29.950: INFO: Pod "pod-secrets-be783ca7-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.106896ms
Jul 23 10:53:31.953: INFO: Pod "pod-secrets-be783ca7-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01272243s
Jul 23 10:53:33.969: INFO: Pod "pod-secrets-be783ca7-ccd2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028631958s
STEP: Saw pod success
Jul 23 10:53:33.969: INFO: Pod "pod-secrets-be783ca7-ccd2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:53:34.156: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-be783ca7-ccd2-11ea-92a5-0242ac11000b container secret-volume-test:
STEP: delete the pod
Jul 23 10:53:34.212: INFO: Waiting for pod pod-secrets-be783ca7-ccd2-11ea-92a5-0242ac11000b to disappear
Jul 23 10:53:34.244: INFO: Pod pod-secrets-be783ca7-ccd2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:53:34.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9xkqj" for this suite.
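For context on what the defaultMode test above verifies: a secret volume's `defaultMode` becomes the permission bits of each file projected into the container. A local simulation of that observable effect, using a plain temp file and an example mode of 0400 (the mode actually asserted by the test is defined in secrets_volume.go and is not shown in this log):

```shell
#!/bin/sh
# Simulate the visible effect of a secret volume's defaultMode: the projected
# file carries exactly those permission bits. A temp file stands in for the
# projected secret key; 0400 is an example value, not the test's value.
f=$(mktemp)
chmod 0400 "$f"          # corresponds to defaultMode: 0400 in a pod spec
ls -l "$f" | cut -c1-10  # permission string of the "projected" file
rm -f "$f"
```

Inside the real test pod, the container does the equivalent check against the mounted secret path and prints the observed mode to its log, which the framework then fetches (the "Trying to get logs ... container secret-volume-test" entry above).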
Jul 23 10:53:40.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:53:40.340: INFO: namespace: e2e-tests-secrets-9xkqj, resource: bindings, ignored listing per whitelist
Jul 23 10:53:40.392: INFO: namespace e2e-tests-secrets-9xkqj deletion completed in 6.143194172s
• [SLOW TEST:10.703 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:53:40.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nb87g
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-nb87g
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-nb87g
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-nb87g
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-nb87g
Jul 23 10:53:48.584: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nb87g, name: ss-0, uid: c93f0ad8-ccd2-11ea-b2c9-0242ac120008, status phase: Pending. Waiting for statefulset controller to delete.
Jul 23 10:53:48.803: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nb87g, name: ss-0, uid: c93f0ad8-ccd2-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete.
Jul 23 10:53:48.810: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-nb87g, name: ss-0, uid: c93f0ad8-ccd2-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete.
Jul 23 10:53:48.832: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-nb87g
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-nb87g
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-nb87g and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 23 10:53:55.157: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nb87g
Jul 23 10:53:55.192: INFO: Scaling statefulset ss to 0
Jul 23 10:54:15.673: INFO: Waiting for statefulset status.replicas updated to 0
Jul 23 10:54:15.676: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:54:15.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nb87g" for this suite.
Jul 23 10:54:22.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:54:22.176: INFO: namespace: e2e-tests-statefulset-nb87g, resource: bindings, ignored listing per whitelist
Jul 23 10:54:22.253: INFO: namespace e2e-tests-statefulset-nb87g deletion completed in 6.535627525s
• [SLOW TEST:41.861 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:54:22.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jul 23 10:54:22.391: INFO: Waiting up to 5m0s for pod "var-expansion-ddc1777d-ccd2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-var-expansion-kvdqb" to be "success or failure"
Jul 23 10:54:22.394: INFO: Pod "var-expansion-ddc1777d-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.853164ms
Jul 23 10:54:24.399: INFO: Pod "var-expansion-ddc1777d-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007022826s
Jul 23 10:54:26.403: INFO: Pod "var-expansion-ddc1777d-ccd2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011197334s
Jul 23 10:54:28.407: INFO: Pod "var-expansion-ddc1777d-ccd2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01552591s
STEP: Saw pod success
Jul 23 10:54:28.407: INFO: Pod "var-expansion-ddc1777d-ccd2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:54:28.411: INFO: Trying to get logs from node hunter-worker pod var-expansion-ddc1777d-ccd2-11ea-92a5-0242ac11000b container dapi-container:
STEP: delete the pod
Jul 23 10:54:28.434: INFO: Waiting for pod var-expansion-ddc1777d-ccd2-11ea-92a5-0242ac11000b to disappear
Jul 23 10:54:28.437: INFO: Pod var-expansion-ddc1777d-ccd2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:54:28.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-kvdqb" for this suite.
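The two var-expansion tests above exercise Kubernetes' `$(VAR)` substitution in a container's command and args: references of the form `$(NAME)` are replaced with the value of the pod's environment variable `NAME` before the container starts. A rough shell emulation of that rule for a single known variable (the real expansion is implemented in Go inside the kubelet; the helper name here is illustrative):

```shell
#!/bin/sh
# expand_command NAME VALUE COMMAND_STRING
# Replaces every occurrence of $(NAME) in COMMAND_STRING with VALUE,
# mimicking Kubernetes' $(VAR) command/args substitution for one variable.
# Note Kubernetes uses the $(NAME) form, not the shell's $NAME/${NAME}.
expand_command() {
  name=$1; value=$2; command=$3
  printf '%s\n' "$command" | sed "s/\$(${name})/${value}/g"
}

expand_command TEST_VAR test-value 'echo $(TEST_VAR)'
```

The call above prints `echo test-value`, which is what the test's `dapi-container` ends up executing; the test then asserts on that container's output.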
Jul 23 10:54:36.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:54:36.466: INFO: namespace: e2e-tests-var-expansion-kvdqb, resource: bindings, ignored listing per whitelist
Jul 23 10:54:36.535: INFO: namespace e2e-tests-var-expansion-kvdqb deletion completed in 8.09461188s

• [SLOW TEST:14.282 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:54:36.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-4l7mg
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-4l7mg
STEP: Deleting pre-stop pod
Jul 23 10:54:49.708: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:54:49.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-4l7mg" for this suite.
Jul 23 10:55:27.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:55:27.773: INFO: namespace: e2e-tests-prestop-4l7mg, resource: bindings, ignored listing per whitelist
Jul 23 10:55:27.814: INFO: namespace e2e-tests-prestop-4l7mg deletion completed in 38.094605104s

• [SLOW TEST:51.278 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:55:27.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 23 10:55:35.995: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:36.040: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 23 10:55:38.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:38.043: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 23 10:55:40.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:40.043: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 23 10:55:42.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:42.044: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 23 10:55:44.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:44.044: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 23 10:55:46.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:46.045: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 23 10:55:48.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:48.044: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 23 10:55:50.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:50.044: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 23 10:55:52.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:52.044: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 23 10:55:54.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:54.044: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 23 10:55:56.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:56.044: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 23 10:55:58.040: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 23 10:55:58.044: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:55:58.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jmbc4" for this suite.
Jul 23 10:56:20.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:56:20.082: INFO: namespace: e2e-tests-container-lifecycle-hook-jmbc4, resource: bindings, ignored listing per whitelist
Jul 23 10:56:20.150: INFO: namespace e2e-tests-container-lifecycle-hook-jmbc4 deletion completed in 22.095347643s

• [SLOW TEST:52.336 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:56:20.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jul 23 10:56:20.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-g4sjl'
Jul 23 10:56:22.763: INFO: stderr: ""
Jul 23 10:56:22.763: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 23 10:56:22.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-g4sjl'
Jul 23 10:56:22.942: INFO: stderr: ""
Jul 23 10:56:22.942: INFO: stdout: "update-demo-nautilus-68xsx update-demo-nautilus-p6pr5 "
Jul 23 10:56:22.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-68xsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g4sjl'
Jul 23 10:56:23.033: INFO: stderr: ""
Jul 23 10:56:23.033: INFO: stdout: ""
Jul 23 10:56:23.033: INFO: update-demo-nautilus-68xsx is created but not running
Jul 23 10:56:28.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-g4sjl'
Jul 23 10:56:28.134: INFO: stderr: ""
Jul 23 10:56:28.134: INFO: stdout: "update-demo-nautilus-68xsx update-demo-nautilus-p6pr5 "
Jul 23 10:56:28.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-68xsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g4sjl'
Jul 23 10:56:28.233: INFO: stderr: ""
Jul 23 10:56:28.233: INFO: stdout: "true"
Jul 23 10:56:28.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-68xsx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g4sjl'
Jul 23 10:56:28.328: INFO: stderr: ""
Jul 23 10:56:28.328: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 23 10:56:28.328: INFO: validating pod update-demo-nautilus-68xsx
Jul 23 10:56:28.333: INFO: got data: {
  "image": "nautilus.jpg"
}
Jul 23 10:56:28.333: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 23 10:56:28.333: INFO: update-demo-nautilus-68xsx is verified up and running
Jul 23 10:56:28.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p6pr5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g4sjl'
Jul 23 10:56:28.431: INFO: stderr: ""
Jul 23 10:56:28.431: INFO: stdout: "true"
Jul 23 10:56:28.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p6pr5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g4sjl'
Jul 23 10:56:28.530: INFO: stderr: ""
Jul 23 10:56:28.530: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 23 10:56:28.530: INFO: validating pod update-demo-nautilus-p6pr5
Jul 23 10:56:28.535: INFO: got data: {
  "image": "nautilus.jpg"
}
Jul 23 10:56:28.535: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 23 10:56:28.535: INFO: update-demo-nautilus-p6pr5 is verified up and running
STEP: using delete to clean up resources
Jul 23 10:56:28.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-g4sjl'
Jul 23 10:56:28.641: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 23 10:56:28.641: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 23 10:56:28.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-g4sjl'
Jul 23 10:56:28.750: INFO: stderr: "No resources found.\n"
Jul 23 10:56:28.750: INFO: stdout: ""
Jul 23 10:56:28.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-g4sjl -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 23 10:56:28.847: INFO: stderr: ""
Jul 23 10:56:28.847: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:56:28.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-g4sjl" for this suite.
Jul 23 10:56:50.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:56:50.897: INFO: namespace: e2e-tests-kubectl-g4sjl, resource: bindings, ignored listing per whitelist
Jul 23 10:56:50.940: INFO: namespace e2e-tests-kubectl-g4sjl deletion completed in 22.088895972s

• [SLOW TEST:30.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:56:50.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 23 10:56:51.149: INFO: Waiting up to 5m0s for pod "downward-api-3666aa66-ccd3-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-tl6sz" to be "success or failure"
Jul 23 10:56:51.518: INFO: Pod "downward-api-3666aa66-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 368.391812ms
Jul 23 10:56:53.522: INFO: Pod "downward-api-3666aa66-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.372580426s
Jul 23 10:56:55.680: INFO: Pod "downward-api-3666aa66-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.530513264s
Jul 23 10:56:57.684: INFO: Pod "downward-api-3666aa66-ccd3-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.53501734s
STEP: Saw pod success
Jul 23 10:56:57.685: INFO: Pod "downward-api-3666aa66-ccd3-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:56:57.688: INFO: Trying to get logs from node hunter-worker pod downward-api-3666aa66-ccd3-11ea-92a5-0242ac11000b container dapi-container: 
STEP: delete the pod
Jul 23 10:56:57.753: INFO: Waiting for pod downward-api-3666aa66-ccd3-11ea-92a5-0242ac11000b to disappear
Jul 23 10:56:57.769: INFO: Pod downward-api-3666aa66-ccd3-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:56:57.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tl6sz" for this suite.
Jul 23 10:57:03.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:57:03.837: INFO: namespace: e2e-tests-downward-api-tl6sz, resource: bindings, ignored listing per whitelist
Jul 23 10:57:03.861: INFO: namespace e2e-tests-downward-api-tl6sz deletion completed in 6.087677423s

• [SLOW TEST:12.920 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] DNS
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:57:03.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-7nwc4 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-7nwc4 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-7nwc4;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-7nwc4.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-7nwc4.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-7nwc4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-7nwc4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-7nwc4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-7nwc4.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-7nwc4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 45.126.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.126.45_udp@PTR;check="$$(dig +tcp +noall +answer +search 45.126.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.126.45_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-7nwc4 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-7nwc4;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-7nwc4 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-7nwc4.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-7nwc4.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-7nwc4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-7nwc4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-7nwc4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-7nwc4.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-7nwc4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 45.126.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.126.45_udp@PTR;check="$$(dig +tcp +noall +answer +search 45.126.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.126.45_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 23 10:57:10.195: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.198: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.201: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.213: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.237: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.239: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.242: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.245: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.247: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.250: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.253: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.256: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:10.275: INFO: Lookups using e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc]
Jul 23 10:57:15.279: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.282: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.285: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.294: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.318: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.321: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.324: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.326: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.329: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.332: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.335: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.338: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:15.355: INFO: Lookups using e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc]
Jul 23 10:57:20.280: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.283: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.286: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.300: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.672: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.675: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.678: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.681: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.684: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.688: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.691: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.693: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b)
Jul 23 10:57:20.729: INFO: Lookups using e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b failed for: 
[wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc] Jul 23 10:57:25.280: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.284: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.287: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.299: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.322: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.325: INFO: Unable to read jessie_tcp@dns-test-service from pod 
e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.328: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.331: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.333: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.336: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.339: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.343: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:25.439: INFO: Lookups using e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc] Jul 23 10:57:30.280: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.284: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.287: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.299: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.319: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.322: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could 
not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.325: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.327: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.330: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.333: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.335: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.338: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:30.356: INFO: Lookups using e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 
wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc] Jul 23 10:57:35.281: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.284: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.286: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.296: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.327: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.330: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods 
dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.333: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.335: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.337: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.339: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.341: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.343: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc from pod e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b: the server could not find the requested resource (get pods dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b) Jul 23 10:57:35.360: INFO: Lookups using e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-7nwc4 
wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-7nwc4 jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4 jessie_udp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@dns-test-service.e2e-tests-dns-7nwc4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-7nwc4.svc] Jul 23 10:57:41.195: INFO: DNS probes using e2e-tests-dns-7nwc4/dns-test-3e2cbb5d-ccd3-11ea-92a5-0242ac11000b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 10:57:41.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-7nwc4" for this suite. Jul 23 10:57:49.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 10:57:49.781: INFO: namespace: e2e-tests-dns-7nwc4, resource: bindings, ignored listing per whitelist Jul 23 10:57:49.781: INFO: namespace e2e-tests-dns-7nwc4 deletion completed in 8.291922204s • [SLOW TEST:45.920 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 10:57:49.782: INFO: >>> kubeConfig: 
/root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-gxkfz in namespace e2e-tests-proxy-hxpjc
I0723 10:57:50.223609 6 runners.go:184] Created replication controller with name: proxy-service-gxkfz, namespace: e2e-tests-proxy-hxpjc, replica count: 1
I0723 10:57:51.274062 6 runners.go:184] proxy-service-gxkfz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0723 10:57:52.274293 6 runners.go:184] proxy-service-gxkfz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0723 10:57:53.274522 6 runners.go:184] proxy-service-gxkfz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0723 10:57:54.274732 6 runners.go:184] proxy-service-gxkfz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jul 23 10:57:54.277: INFO: setup took 4.267393636s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul 23 10:57:54.285: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-hxpjc/pods/proxy-service-gxkfz-6k8t5/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 10:58:18.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a7a4912-ccd3-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-h4j8d" to be "success or failure"
Jul 23 10:58:18.663: INFO: Pod "downwardapi-volume-6a7a4912-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 146.905235ms
Jul 23 10:58:20.722: INFO: Pod "downwardapi-volume-6a7a4912-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206754142s
Jul 23 10:58:22.976: INFO: Pod "downwardapi-volume-6a7a4912-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.459992729s
Jul 23 10:58:28.206: INFO: Pod "downwardapi-volume-6a7a4912-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.690808289s
Jul 23 10:58:30.585: INFO: Pod "downwardapi-volume-6a7a4912-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.069064137s
Jul 23 10:58:32.849: INFO: Pod "downwardapi-volume-6a7a4912-ccd3-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.332869325s
STEP: Saw pod success
Jul 23 10:58:32.849: INFO: Pod "downwardapi-volume-6a7a4912-ccd3-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:58:32.852: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6a7a4912-ccd3-11ea-92a5-0242ac11000b container client-container:
STEP: delete the pod
Jul 23 10:58:36.231: INFO: Waiting for pod downwardapi-volume-6a7a4912-ccd3-11ea-92a5-0242ac11000b to disappear
Jul 23 10:58:36.682: INFO: Pod downwardapi-volume-6a7a4912-ccd3-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:58:36.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h4j8d" for this suite.
Jul 23 10:58:44.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:58:44.742: INFO: namespace: e2e-tests-projected-h4j8d, resource: bindings, ignored listing per whitelist
Jul 23 10:58:44.775: INFO: namespace e2e-tests-projected-h4j8d deletion completed in 8.090246435s
• [SLOW TEST:28.460 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes
client
Jul 23 10:58:44.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jul 23 10:58:55.648: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:59:20.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-kgtzr" for this suite.
Jul 23 10:59:26.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:59:27.011: INFO: namespace: e2e-tests-namespaces-kgtzr, resource: bindings, ignored listing per whitelist
Jul 23 10:59:27.018: INFO: namespace e2e-tests-namespaces-kgtzr deletion completed in 6.095888181s
STEP: Destroying namespace "e2e-tests-nsdeletetest-8687w" for this suite.
Jul 23 10:59:27.020: INFO: Namespace e2e-tests-nsdeletetest-8687w was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-4g6pj" for this suite.
Jul 23 10:59:33.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:59:33.097: INFO: namespace: e2e-tests-nsdeletetest-4g6pj, resource: bindings, ignored listing per whitelist
Jul 23 10:59:33.125: INFO: namespace e2e-tests-nsdeletetest-4g6pj deletion completed in 6.104332365s
• [SLOW TEST:48.349 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:59:33.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 10:59:33.301: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jul 23 10:59:33.310: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9w9pl/daemonsets","resourceVersion":"2347558"},"items":null}
Jul 23 10:59:33.313: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9w9pl/pods","resourceVersion":"2347558"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:59:33.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-9w9pl" for this suite.
Jul 23 10:59:39.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:59:39.354: INFO: namespace: e2e-tests-daemonsets-9w9pl, resource: bindings, ignored listing per whitelist
Jul 23 10:59:39.424: INFO: namespace e2e-tests-daemonsets-9w9pl deletion completed in 6.100388244s
S [SKIPPING] [6.300 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
  Jul 23 10:59:33.301: Requires at least 2 nodes (not -1)
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:59:39.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a
namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-9be3ec3b-ccd3-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 23 10:59:41.661: INFO: Waiting up to 5m0s for pod "pod-configmaps-9c080cb9-ccd3-11ea-92a5-0242ac11000b" in namespace "e2e-tests-configmap-jqm5w" to be "success or failure"
Jul 23 10:59:41.849: INFO: Pod "pod-configmaps-9c080cb9-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 187.998144ms
Jul 23 10:59:44.023: INFO: Pod "pod-configmaps-9c080cb9-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361083378s
Jul 23 10:59:46.485: INFO: Pod "pod-configmaps-9c080cb9-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.823104994s
Jul 23 10:59:48.532: INFO: Pod "pod-configmaps-9c080cb9-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.870354543s
Jul 23 10:59:50.688: INFO: Pod "pod-configmaps-9c080cb9-ccd3-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.026866409s
STEP: Saw pod success
Jul 23 10:59:50.688: INFO: Pod "pod-configmaps-9c080cb9-ccd3-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 10:59:50.692: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-9c080cb9-ccd3-11ea-92a5-0242ac11000b container configmap-volume-test:
STEP: delete the pod
Jul 23 10:59:50.752: INFO: Waiting for pod pod-configmaps-9c080cb9-ccd3-11ea-92a5-0242ac11000b to disappear
Jul 23 10:59:50.945: INFO: Pod pod-configmaps-9c080cb9-ccd3-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:59:50.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jqm5w" for this suite.
Jul 23 10:59:56.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 10:59:56.989: INFO: namespace: e2e-tests-configmap-jqm5w, resource: bindings, ignored listing per whitelist
Jul 23 10:59:57.049: INFO: namespace e2e-tests-configmap-jqm5w deletion completed in 6.100336254s
• [SLOW TEST:17.624 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 10:59:57.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jul 23 10:59:57.168: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix488121887/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 10:59:57.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2tn74" for this suite.
Jul 23 11:00:03.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:00:03.332: INFO: namespace: e2e-tests-kubectl-2tn74, resource: bindings, ignored listing per whitelist
Jul 23 11:00:03.342: INFO: namespace e2e-tests-kubectl-2tn74 deletion completed in 6.085634545s
• [SLOW TEST:6.293 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:00:03.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:00:10.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-v8tg2" for this suite. Jul 23 11:00:32.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:00:32.886: INFO: namespace: e2e-tests-replication-controller-v8tg2, resource: bindings, ignored listing per whitelist Jul 23 11:00:32.937: INFO: namespace e2e-tests-replication-controller-v8tg2 deletion completed in 22.121120974s • [SLOW TEST:29.595 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:00:32.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-bab63972-ccd3-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 23 11:00:33.107: INFO: Waiting up to 5m0s for pod "pod-configmaps-bab7c0e6-ccd3-11ea-92a5-0242ac11000b" in namespace "e2e-tests-configmap-2rtkv" to be "success or failure"
Jul 23 11:00:33.126: INFO: Pod "pod-configmaps-bab7c0e6-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.761707ms
Jul 23 11:00:35.148: INFO: Pod "pod-configmaps-bab7c0e6-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041202163s
Jul 23 11:00:37.152: INFO: Pod "pod-configmaps-bab7c0e6-ccd3-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.044742127s
Jul 23 11:00:39.155: INFO: Pod "pod-configmaps-bab7c0e6-ccd3-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047909706s
STEP: Saw pod success
Jul 23 11:00:39.155: INFO: Pod "pod-configmaps-bab7c0e6-ccd3-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:00:39.157: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-bab7c0e6-ccd3-11ea-92a5-0242ac11000b container configmap-volume-test:
STEP: delete the pod
Jul 23 11:00:39.178: INFO: Waiting for pod pod-configmaps-bab7c0e6-ccd3-11ea-92a5-0242ac11000b to disappear
Jul 23 11:00:39.233: INFO: Pod pod-configmaps-bab7c0e6-ccd3-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:00:39.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2rtkv" for this suite.
Jul 23 11:00:45.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:00:45.326: INFO: namespace: e2e-tests-configmap-2rtkv, resource: bindings, ignored listing per whitelist
Jul 23 11:00:45.329: INFO: namespace e2e-tests-configmap-2rtkv deletion completed in 6.092344208s
• [SLOW TEST:12.391 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:00:45.330: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-5g52m.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-5g52m.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5g52m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-5g52m.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-5g52m.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5g52m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 23 11:00:55.641: INFO: DNS probes using e2e-tests-dns-5g52m/dns-test-c21207c3-ccd3-11ea-92a5-0242ac11000b succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:00:55.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-5g52m" for this suite.
Jul 23 11:01:02.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:01:02.831: INFO: namespace: e2e-tests-dns-5g52m, resource: bindings, ignored listing per whitelist
Jul 23 11:01:02.865: INFO: namespace e2e-tests-dns-5g52m deletion completed in 7.111672426s
• [SLOW TEST:17.535 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:01:02.865: INFO:
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul 23 11:01:03.349: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-vcdjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-vcdjn/configmaps/e2e-watch-test-resource-version,UID:cca99d7f-ccd3-11ea-b2c9-0242ac120008,ResourceVersion:2348011,Generation:0,CreationTimestamp:2020-07-23 11:01:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 23 11:01:03.349: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-vcdjn,SelfLink:/api/v1/namespaces/e2e-tests-watch-vcdjn/configmaps/e2e-watch-test-resource-version,UID:cca99d7f-ccd3-11ea-b2c9-0242ac120008,ResourceVersion:2348012,Generation:0,CreationTimestamp:2020-07-23 11:01:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:01:03.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-vcdjn" for this suite.
Jul 23 11:01:09.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:01:09.390: INFO: namespace: e2e-tests-watch-vcdjn, resource: bindings, ignored listing per whitelist
Jul 23 11:01:09.490: INFO: namespace e2e-tests-watch-vcdjn deletion completed in 6.126040589s
• [SLOW TEST:6.625 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:01:09.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-d07bce89-ccd3-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 23 11:01:09.637: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d07e011e-ccd3-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-wcnd5" to be "success or failure"
Jul 23 11:01:09.650: INFO: Pod "pod-projected-configmaps-d07e011e-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.193742ms
Jul 23 11:01:11.654: INFO: Pod "pod-projected-configmaps-d07e011e-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016821144s
Jul 23 11:01:13.658: INFO: Pod "pod-projected-configmaps-d07e011e-ccd3-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020232168s
STEP: Saw pod success
Jul 23 11:01:13.658: INFO: Pod "pod-projected-configmaps-d07e011e-ccd3-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:01:13.660: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-d07e011e-ccd3-11ea-92a5-0242ac11000b container projected-configmap-volume-test:
STEP: delete the pod
Jul 23 11:01:13.924: INFO: Waiting for pod pod-projected-configmaps-d07e011e-ccd3-11ea-92a5-0242ac11000b to disappear
Jul 23 11:01:14.108: INFO: Pod pod-projected-configmaps-d07e011e-ccd3-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:01:14.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wcnd5" for this suite.
Jul 23 11:01:20.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:01:20.210: INFO: namespace: e2e-tests-projected-wcnd5, resource: bindings, ignored listing per whitelist
Jul 23 11:01:20.226: INFO: namespace e2e-tests-projected-wcnd5 deletion completed in 6.112624667s
• [SLOW TEST:10.736 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:01:20.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 23 11:01:20.408: INFO: Waiting up to 5m0s for pod "pod-d6e80bdc-ccd3-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-nd9kp" to be "success or failure"
Jul 23 11:01:20.430: INFO: Pod "pod-d6e80bdc-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.946395ms
Jul 23 11:01:22.434: INFO: Pod "pod-d6e80bdc-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026245251s
Jul 23 11:01:24.439: INFO: Pod "pod-d6e80bdc-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0304726s
Jul 23 11:01:26.442: INFO: Pod "pod-d6e80bdc-ccd3-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 6.034298136s
Jul 23 11:01:28.447: INFO: Pod "pod-d6e80bdc-ccd3-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038330721s
STEP: Saw pod success
Jul 23 11:01:28.447: INFO: Pod "pod-d6e80bdc-ccd3-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:01:28.450: INFO: Trying to get logs from node hunter-worker pod pod-d6e80bdc-ccd3-11ea-92a5-0242ac11000b container test-container:
STEP: delete the pod
Jul 23 11:01:28.470: INFO: Waiting for pod pod-d6e80bdc-ccd3-11ea-92a5-0242ac11000b to disappear
Jul 23 11:01:28.487: INFO: Pod pod-d6e80bdc-ccd3-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:01:28.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nd9kp" for this suite.
Jul 23 11:01:34.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:01:34.569: INFO: namespace: e2e-tests-emptydir-nd9kp, resource: bindings, ignored listing per whitelist
Jul 23 11:01:34.584: INFO: namespace e2e-tests-emptydir-nd9kp deletion completed in 6.094153361s
• [SLOW TEST:14.358 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:01:34.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jul 23 11:01:35.268: INFO: created pod pod-service-account-defaultsa
Jul 23 11:01:35.268: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul 23 11:01:35.336: INFO: created pod pod-service-account-mountsa
Jul 23 11:01:35.336: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul 23 11:01:35.358: INFO: created pod pod-service-account-nomountsa
Jul 23 11:01:35.358: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul 23 11:01:35.393: INFO: created pod pod-service-account-defaultsa-mountspec
Jul 23 11:01:35.393: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul 23 11:01:35.420: INFO: created pod pod-service-account-mountsa-mountspec
Jul 23 11:01:35.420: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul 23 11:01:35.469: INFO: created pod pod-service-account-nomountsa-mountspec
Jul 23 11:01:35.469: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul 23 11:01:35.490: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul 23 11:01:35.490: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul 23 11:01:35.519: INFO: created pod pod-service-account-mountsa-nomountspec
Jul 23 11:01:35.519: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul 23 11:01:35.556: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul 23 11:01:35.556: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:01:35.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-m6jtb" for this suite.
Jul 23 11:02:09.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:02:09.810: INFO: namespace: e2e-tests-svcaccounts-m6jtb, resource: bindings, ignored listing per whitelist
Jul 23 11:02:09.825: INFO: namespace e2e-tests-svcaccounts-m6jtb deletion completed in 34.171593444s
• [SLOW TEST:35.240 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] HostPath
should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:02:09.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jul 23 11:02:10.019: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-4j9bv" to be "success or failure"
Jul 23 11:02:10.045: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 26.544414ms
Jul 23 11:02:12.223: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203893582s
Jul 23 11:02:14.226: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207361473s
Jul 23 11:02:16.277: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25788981s
Jul 23 11:02:18.280: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.261721078s
STEP: Saw pod success
Jul 23 11:02:18.281: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul 23 11:02:18.284: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jul 23 11:02:18.829: INFO: Waiting for pod pod-host-path-test to disappear
Jul 23 11:02:18.856: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:02:18.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-4j9bv" for this suite.
Jul 23 11:02:25.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:02:25.091: INFO: namespace: e2e-tests-hostpath-4j9bv, resource: bindings, ignored listing per whitelist
Jul 23 11:02:25.134: INFO: namespace e2e-tests-hostpath-4j9bv deletion completed in 6.274604224s
• [SLOW TEST:15.309 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-node] Downward API
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:02:25.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 23 11:02:25.416: INFO: Waiting up to 5m0s for pod "downward-api-fd98bcc7-ccd3-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-cvfqs" to be "success or failure"
Jul 23 11:02:25.444: INFO: Pod "downward-api-fd98bcc7-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.964091ms
Jul 23 11:02:27.475: INFO: Pod "downward-api-fd98bcc7-ccd3-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059561513s
Jul 23 11:02:29.479: INFO: Pod "downward-api-fd98bcc7-ccd3-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063389433s
STEP: Saw pod success
Jul 23 11:02:29.479: INFO: Pod "downward-api-fd98bcc7-ccd3-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:02:29.482: INFO: Trying to get logs from node hunter-worker2 pod downward-api-fd98bcc7-ccd3-11ea-92a5-0242ac11000b container dapi-container:
STEP: delete the pod
Jul 23 11:02:29.521: INFO: Waiting for pod downward-api-fd98bcc7-ccd3-11ea-92a5-0242ac11000b to disappear
Jul 23 11:02:29.531: INFO: Pod downward-api-fd98bcc7-ccd3-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:02:29.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cvfqs" for this suite.
Jul 23 11:02:35.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:02:35.641: INFO: namespace: e2e-tests-downward-api-cvfqs, resource: bindings, ignored listing per whitelist
Jul 23 11:02:35.664: INFO: namespace e2e-tests-downward-api-cvfqs deletion completed in 6.128672163s
• [SLOW TEST:10.530 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:02:35.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 23 11:02:36.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-dmltx'
Jul 23 11:02:36.407: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 23 11:02:36.407: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jul 23 11:02:38.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-dmltx'
Jul 23 11:02:39.006: INFO: stderr: ""
Jul 23 11:02:39.006: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:02:39.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dmltx" for this suite.
Jul 23 11:03:03.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:03:03.335: INFO: namespace: e2e-tests-kubectl-dmltx, resource: bindings, ignored listing per whitelist
Jul 23 11:03:03.357: INFO: namespace e2e-tests-kubectl-dmltx deletion completed in 24.31985108s
• [SLOW TEST:27.693 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:03:03.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 23 11:03:03.510: INFO: Waiting up to 5m0s for pod "pod-1457bf54-ccd4-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-4bgwt" to be "success or failure"
Jul 23 11:03:03.570: INFO: Pod "pod-1457bf54-ccd4-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 60.464813ms
Jul 23 11:03:05.574: INFO: Pod "pod-1457bf54-ccd4-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064576451s
Jul 23 11:03:07.578: INFO: Pod "pod-1457bf54-ccd4-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068775767s
STEP: Saw pod success
Jul 23 11:03:07.578: INFO: Pod "pod-1457bf54-ccd4-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:03:07.581: INFO: Trying to get logs from node hunter-worker2 pod pod-1457bf54-ccd4-11ea-92a5-0242ac11000b container test-container:
STEP: delete the pod
Jul 23 11:03:07.661: INFO: Waiting for pod pod-1457bf54-ccd4-11ea-92a5-0242ac11000b to disappear
Jul 23 11:03:07.672: INFO: Pod pod-1457bf54-ccd4-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:03:07.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4bgwt" for this suite.
Jul 23 11:03:13.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:03:13.786: INFO: namespace: e2e-tests-emptydir-4bgwt, resource: bindings, ignored listing per whitelist Jul 23 11:03:13.943: INFO: namespace e2e-tests-emptydir-4bgwt deletion completed in 6.266815923s • [SLOW TEST:10.585 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:03:13.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-1ab1c516-ccd4-11ea-92a5-0242ac11000b STEP: Creating a pod to test consume secrets Jul 23 11:03:14.282: INFO: Waiting up to 5m0s for pod "pod-secrets-1ac92b3b-ccd4-11ea-92a5-0242ac11000b" in namespace "e2e-tests-secrets-gkdjc" to be "success or failure" Jul 23 11:03:14.285: INFO: Pod "pod-secrets-1ac92b3b-ccd4-11ea-92a5-0242ac11000b": Phase="Pending", 
Reason="", readiness=false. Elapsed: 3.709731ms Jul 23 11:03:16.292: INFO: Pod "pod-secrets-1ac92b3b-ccd4-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010307417s Jul 23 11:03:18.313: INFO: Pod "pod-secrets-1ac92b3b-ccd4-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031303742s STEP: Saw pod success Jul 23 11:03:18.313: INFO: Pod "pod-secrets-1ac92b3b-ccd4-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:03:18.315: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-1ac92b3b-ccd4-11ea-92a5-0242ac11000b container secret-volume-test: STEP: delete the pod Jul 23 11:03:18.460: INFO: Waiting for pod pod-secrets-1ac92b3b-ccd4-11ea-92a5-0242ac11000b to disappear Jul 23 11:03:18.469: INFO: Pod pod-secrets-1ac92b3b-ccd4-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:03:18.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gkdjc" for this suite. Jul 23 11:03:24.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:03:24.605: INFO: namespace: e2e-tests-secrets-gkdjc, resource: bindings, ignored listing per whitelist Jul 23 11:03:24.614: INFO: namespace e2e-tests-secrets-gkdjc deletion completed in 6.142660926s STEP: Destroying namespace "e2e-tests-secret-namespace-b4785" for this suite. 
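The Secrets test above involves two namespaces (both visible in the log's teardown: "e2e-tests-secrets-gkdjc" and "e2e-tests-secret-namespace-b4785") holding secrets with the same name. A sketch of that scenario, with illustrative names and data values rather than the log's generated UUIDs:

```python
# Two secrets with the same name in different namespaces: a pod mounting the
# secret by name must resolve it within its own namespace only.
secret_a = {"apiVersion": "v1", "kind": "Secret",
            "metadata": {"namespace": "e2e-tests-secrets-gkdjc",
                         "name": "secret-test"},
            "stringData": {"data-1": "value-1"}}

secret_b = {"apiVersion": "v1", "kind": "Secret",
            "metadata": {"namespace": "e2e-tests-secret-namespace-b4785",
                         "name": "secret-test"},  # same name, different namespace
            "stringData": {"data-1": "value-2"}}

# The test pod (in secret_a's namespace) mounts only by secretName; there is
# no cross-namespace reference, which is the property being verified.
volume = {"name": "secret-volume",
          "secret": {"secretName": "secret-test"}}
```

Because secret volume sources carry no namespace field, the kubelet can only resolve the pod's own namespace, so secret_b's existence must not affect the mount.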
Jul 23 11:03:30.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:03:30.698: INFO: namespace: e2e-tests-secret-namespace-b4785, resource: bindings, ignored listing per whitelist Jul 23 11:03:30.702: INFO: namespace e2e-tests-secret-namespace-b4785 deletion completed in 6.087765082s • [SLOW TEST:16.759 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:03:30.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-5vplq Jul 23 11:03:36.865: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-5vplq STEP: checking the pod's current state and verifying that restartCount 
is present Jul 23 11:03:36.868: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:07:37.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5vplq" for this suite. Jul 23 11:07:43.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:07:43.596: INFO: namespace: e2e-tests-container-probe-5vplq, resource: bindings, ignored listing per whitelist Jul 23 11:07:43.646: INFO: namespace e2e-tests-container-probe-5vplq deletion completed in 6.086574535s • [SLOW TEST:252.943 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:07:43.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jul 23 11:07:50.293: INFO: Successfully updated pod "annotationupdatebb68dce8-ccd4-11ea-92a5-0242ac11000b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:07:52.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-97kch" for this suite. Jul 23 11:08:30.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:08:30.362: INFO: namespace: e2e-tests-projected-97kch, resource: bindings, ignored listing per whitelist Jul 23 11:08:30.412: INFO: namespace e2e-tests-projected-97kch deletion completed in 38.098169866s • [SLOW TEST:46.766 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:08:30.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 23 11:08:37.074: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d7410d27-ccd4-11ea-92a5-0242ac11000b" Jul 23 11:08:37.074: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d7410d27-ccd4-11ea-92a5-0242ac11000b" in namespace "e2e-tests-pods-gdqmr" to be "terminated due to deadline exceeded" Jul 23 11:08:37.081: INFO: Pod "pod-update-activedeadlineseconds-d7410d27-ccd4-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 7.025118ms Jul 23 11:08:39.096: INFO: Pod "pod-update-activedeadlineseconds-d7410d27-ccd4-11ea-92a5-0242ac11000b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.021231266s Jul 23 11:08:39.096: INFO: Pod "pod-update-activedeadlineseconds-d7410d27-ccd4-11ea-92a5-0242ac11000b" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:08:39.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-gdqmr" for this suite. 
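The activeDeadlineSeconds test above follows a create-then-patch pattern: the pod starts with no deadline, the test sets a short one, and the kubelet terminates the pod with Reason="DeadlineExceeded" (seen in the log as the Phase="Failed" transition). A toy sketch of that update, using a plain dict in place of a live API client:

```python
# Illustrative only: mimics the spec update the test performs via the API
# server. activeDeadlineSeconds is one of the few pod spec fields that is
# mutable after creation.
pod_spec = {"activeDeadlineSeconds": None}  # no deadline at creation

def set_deadline(spec, seconds):
    """Patch the pod's active deadline; kubelet then enforces it."""
    spec["activeDeadlineSeconds"] = seconds
    return spec

set_deadline(pod_spec, 5)
# Once the deadline elapses, the pod is expected to reach
# Phase="Failed", Reason="DeadlineExceeded", as in the log.
```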
Jul 23 11:08:45.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:08:45.668: INFO: namespace: e2e-tests-pods-gdqmr, resource: bindings, ignored listing per whitelist Jul 23 11:08:45.698: INFO: namespace e2e-tests-pods-gdqmr deletion completed in 6.598596898s • [SLOW TEST:15.285 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:08:45.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jul 23 11:08:45.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-qtfwf run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jul 23 11:08:54.098: INFO: stderr: 
"kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0723 11:08:54.009939 358 log.go:172] (0xc000150790) (0xc000ac61e0) Create stream\nI0723 11:08:54.009992 358 log.go:172] (0xc000150790) (0xc000ac61e0) Stream added, broadcasting: 1\nI0723 11:08:54.013290 358 log.go:172] (0xc000150790) Reply frame received for 1\nI0723 11:08:54.013331 358 log.go:172] (0xc000150790) (0xc0008585a0) Create stream\nI0723 11:08:54.013343 358 log.go:172] (0xc000150790) (0xc0008585a0) Stream added, broadcasting: 3\nI0723 11:08:54.014347 358 log.go:172] (0xc000150790) Reply frame received for 3\nI0723 11:08:54.014446 358 log.go:172] (0xc000150790) (0xc0006a8000) Create stream\nI0723 11:08:54.014468 358 log.go:172] (0xc000150790) (0xc0006a8000) Stream added, broadcasting: 5\nI0723 11:08:54.015314 358 log.go:172] (0xc000150790) Reply frame received for 5\nI0723 11:08:54.015351 358 log.go:172] (0xc000150790) (0xc0006a80a0) Create stream\nI0723 11:08:54.015363 358 log.go:172] (0xc000150790) (0xc0006a80a0) Stream added, broadcasting: 7\nI0723 11:08:54.016103 358 log.go:172] (0xc000150790) Reply frame received for 7\nI0723 11:08:54.016318 358 log.go:172] (0xc0008585a0) (3) Writing data frame\nI0723 11:08:54.016466 358 log.go:172] (0xc0008585a0) (3) Writing data frame\nI0723 11:08:54.017550 358 log.go:172] (0xc000150790) Data frame received for 5\nI0723 11:08:54.017578 358 log.go:172] (0xc0006a8000) (5) Data frame handling\nI0723 11:08:54.017603 358 log.go:172] (0xc0006a8000) (5) Data frame sent\nI0723 11:08:54.018266 358 log.go:172] (0xc000150790) Data frame received for 5\nI0723 11:08:54.018284 358 log.go:172] (0xc0006a8000) (5) Data frame handling\nI0723 11:08:54.018300 358 log.go:172] (0xc0006a8000) (5) Data frame sent\nI0723 11:08:54.071645 358 log.go:172] (0xc000150790) Data frame received for 7\nI0723 11:08:54.071700 358 log.go:172] 
(0xc0006a80a0) (7) Data frame handling\nI0723 11:08:54.071730 358 log.go:172] (0xc000150790) Data frame received for 5\nI0723 11:08:54.071744 358 log.go:172] (0xc0006a8000) (5) Data frame handling\nI0723 11:08:54.072509 358 log.go:172] (0xc000150790) Data frame received for 1\nI0723 11:08:54.072562 358 log.go:172] (0xc000150790) (0xc0008585a0) Stream removed, broadcasting: 3\nI0723 11:08:54.072605 358 log.go:172] (0xc000ac61e0) (1) Data frame handling\nI0723 11:08:54.072650 358 log.go:172] (0xc000ac61e0) (1) Data frame sent\nI0723 11:08:54.072670 358 log.go:172] (0xc000150790) (0xc000ac61e0) Stream removed, broadcasting: 1\nI0723 11:08:54.072795 358 log.go:172] (0xc000150790) Go away received\nI0723 11:08:54.072950 358 log.go:172] (0xc000150790) (0xc000ac61e0) Stream removed, broadcasting: 1\nI0723 11:08:54.073003 358 log.go:172] (0xc000150790) (0xc0008585a0) Stream removed, broadcasting: 3\nI0723 11:08:54.073022 358 log.go:172] (0xc000150790) (0xc0006a8000) Stream removed, broadcasting: 5\nI0723 11:08:54.073039 358 log.go:172] (0xc000150790) (0xc0006a80a0) Stream removed, broadcasting: 7\n" Jul 23 11:08:54.098: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:08:56.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qtfwf" for this suite. 
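The stderr in the run above includes the deprecation warning for `kubectl run --generator=job/v1`. For context, the generator flags were deprecated in this era of kubectl and later removed, with `run-pod/v1` (plain `kubectl run`) the only surviving behavior. A small mapping of the invocations seen in this log to their suggested replacements (command strings only; nothing here talks to a cluster):

```python
# Old kubectl run generator invocations from this log, mapped to the
# replacements the deprecation warnings themselves suggest.
modern_equivalent = {
    "kubectl run --generator=deployment/apps.v1": "kubectl create deployment",
    "kubectl run --generator=job/v1": "kubectl create job",
    # run-pod/v1 is what bare `kubectl run` became
    "kubectl run --generator=run-pod/v1": "kubectl run --restart=Never",
}
```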
Jul 23 11:09:02.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:09:02.188: INFO: namespace: e2e-tests-kubectl-qtfwf, resource: bindings, ignored listing per whitelist Jul 23 11:09:02.188: INFO: namespace e2e-tests-kubectl-qtfwf deletion completed in 6.078634287s • [SLOW TEST:16.491 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:09:02.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-gj62h/configmap-test-ea3b6001-ccd4-11ea-92a5-0242ac11000b STEP: Creating a pod to test consume configMaps Jul 23 11:09:02.322: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea3c0c88-ccd4-11ea-92a5-0242ac11000b" in namespace "e2e-tests-configmap-gj62h" to be "success or failure" Jul 23 11:09:02.341: INFO: Pod "pod-configmaps-ea3c0c88-ccd4-11ea-92a5-0242ac11000b": 
Phase="Pending", Reason="", readiness=false. Elapsed: 18.553037ms Jul 23 11:09:04.345: INFO: Pod "pod-configmaps-ea3c0c88-ccd4-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022357065s Jul 23 11:09:06.349: INFO: Pod "pod-configmaps-ea3c0c88-ccd4-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.026012771s Jul 23 11:09:08.352: INFO: Pod "pod-configmaps-ea3c0c88-ccd4-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029477975s STEP: Saw pod success Jul 23 11:09:08.352: INFO: Pod "pod-configmaps-ea3c0c88-ccd4-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:09:08.354: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-ea3c0c88-ccd4-11ea-92a5-0242ac11000b container env-test: STEP: delete the pod Jul 23 11:09:08.462: INFO: Waiting for pod pod-configmaps-ea3c0c88-ccd4-11ea-92a5-0242ac11000b to disappear Jul 23 11:09:08.526: INFO: Pod pod-configmaps-ea3c0c88-ccd4-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:09:08.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gj62h" for this suite. 
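The [sig-node] ConfigMap test above consumes a ConfigMap through a container environment variable rather than a volume. The wiring it checks can be sketched as follows; the container name "env-test" is from the log, while the ConfigMap key and env var names are illustrative:

```python
# A ConfigMap key surfaced as an environment variable via configMapKeyRef.
configmap = {"apiVersion": "v1", "kind": "ConfigMap",
             "metadata": {"name": "configmap-test"},
             "data": {"data-1": "value-1"}}

container = {
    "name": "env-test",
    "image": "docker.io/library/busybox:1.29",
    "command": ["sh", "-c", "env"],  # the test greps the printed environment
    "env": [{
        "name": "CONFIG_DATA_1",
        "valueFrom": {"configMapKeyRef": {"name": "configmap-test",
                                          "key": "data-1"}},
    }],
}
```

The test then reads the container's logs (the "Trying to get logs" line above) and checks that the variable carries the ConfigMap's value.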
Jul 23 11:09:14.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:09:14.720: INFO: namespace: e2e-tests-configmap-gj62h, resource: bindings, ignored listing per whitelist Jul 23 11:09:14.764: INFO: namespace e2e-tests-configmap-gj62h deletion completed in 6.234486792s • [SLOW TEST:12.576 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:09:14.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jul 23 11:09:22.058: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-f1e2a04f-ccd4-11ea-92a5-0242ac11000b", GenerateName:"", Namespace:"e2e-tests-pods-7lhdx", 
SelfLink:"/api/v1/namespaces/e2e-tests-pods-7lhdx/pods/pod-submit-remove-f1e2a04f-ccd4-11ea-92a5-0242ac11000b", UID:"f1e8f0b7-ccd4-11ea-b2c9-0242ac120008", ResourceVersion:"2350020", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731099355, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"151580553"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rn2dn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00222a800), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rn2dn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002415a58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b26de0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002415aa0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002415ac0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002415ac8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), 
EnableServiceLinks:(*bool)(0xc002415acc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731099355, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731099359, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731099359, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731099355, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.10", StartTime:(*v1.Time)(0xc001c40c00), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001c40c20), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"containerd://13c8b8ce9bab82a961dc21ab9650b764e8afeda0ddcd7351e14d486450f4e15b"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:09:28.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-7lhdx" for this suite. Jul 23 11:09:34.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:09:35.008: INFO: namespace: e2e-tests-pods-7lhdx, resource: bindings, ignored listing per whitelist Jul 23 11:09:35.031: INFO: namespace e2e-tests-pods-7lhdx deletion completed in 7.005732341s • [SLOW TEST:20.267 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:09:35.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-fdd1d030-ccd4-11ea-92a5-0242ac11000b STEP: Creating a pod to test consume secrets Jul 23 11:09:35.186: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fdd28811-ccd4-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-nlqrs" to be "success or failure" Jul 23 11:09:35.209: INFO: Pod "pod-projected-secrets-fdd28811-ccd4-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.468058ms Jul 23 11:09:37.212: INFO: Pod "pod-projected-secrets-fdd28811-ccd4-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025508886s Jul 23 11:09:40.802: INFO: Pod "pod-projected-secrets-fdd28811-ccd4-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.615670265s Jul 23 11:09:43.060: INFO: Pod "pod-projected-secrets-fdd28811-ccd4-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.873515944s Jul 23 11:09:46.129: INFO: Pod "pod-projected-secrets-fdd28811-ccd4-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.942673542s Jul 23 11:09:48.133: INFO: Pod "pod-projected-secrets-fdd28811-ccd4-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.946607349s STEP: Saw pod success Jul 23 11:09:48.133: INFO: Pod "pod-projected-secrets-fdd28811-ccd4-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:09:48.136: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-fdd28811-ccd4-11ea-92a5-0242ac11000b container projected-secret-volume-test: STEP: delete the pod Jul 23 11:09:49.653: INFO: Waiting for pod pod-projected-secrets-fdd28811-ccd4-11ea-92a5-0242ac11000b to disappear Jul 23 11:09:49.670: INFO: Pod pod-projected-secrets-fdd28811-ccd4-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:09:49.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nlqrs" for this suite. Jul 23 11:09:55.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:09:55.876: INFO: namespace: e2e-tests-projected-nlqrs, resource: bindings, ignored listing per whitelist Jul 23 11:09:55.899: INFO: namespace e2e-tests-projected-nlqrs deletion completed in 6.192086519s • [SLOW TEST:20.868 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:09:55.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 23 11:09:57.036: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:10:10.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-phz29" for this suite. 
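For context, the InitContainer case above exercises the guarantee that on a `restartPolicy: Never` pod a failing init container is terminal: the app container never starts and the pod ends in phase `Failed`. A minimal manifest that reproduces the same behavior might look like the sketch below (the pod name, image, and commands are illustrative, not taken from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo        # illustrative name, not the test's generated name
spec:
  restartPolicy: Never        # init failure is terminal: pod goes to phase Failed
  initContainers:
  - name: init-fails
    image: busybox
    command: ["/bin/false"]   # always exits non-zero
  containers:
  - name: app                 # never started, because the init container failed
    image: busybox
    command: ["sleep", "3600"]
```

With `restartPolicy: Never` the kubelet does not retry the failed init container, so the pod surfaces an `Init:Error` status and the `app` container is never created.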
Jul 23 11:10:17.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:10:17.220: INFO: namespace: e2e-tests-init-container-phz29, resource: bindings, ignored listing per whitelist Jul 23 11:10:17.611: INFO: namespace e2e-tests-init-container-phz29 deletion completed in 6.837345481s • [SLOW TEST:21.712 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:10:17.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:11:17.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-container-probe-h8rbg" for this suite. Jul 23 11:11:40.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:11:40.255: INFO: namespace: e2e-tests-container-probe-h8rbg, resource: bindings, ignored listing per whitelist Jul 23 11:11:40.325: INFO: namespace e2e-tests-container-probe-h8rbg deletion completed in 22.47752407s • [SLOW TEST:82.714 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:11:40.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 23 11:11:44.534: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-488403f0-ccd5-11ea-92a5-0242ac11000b,GenerateName:,Namespace:e2e-tests-events-s4hp5,SelfLink:/api/v1/namespaces/e2e-tests-events-s4hp5/pods/send-events-488403f0-ccd5-11ea-92a5-0242ac11000b,UID:4885ce4d-ccd5-11ea-b2c9-0242ac120008,ResourceVersion:2350391,Generation:0,CreationTimestamp:2020-07-23 11:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 493315917,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hr8zs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hr8zs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-hr8zs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a3e1e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a3e200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:11:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:11:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:11:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:11:40 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.13,StartTime:2020-07-23 11:11:40 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-07-23 11:11:43 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://c43bb3d758f15519ed5a843f0b8e385cc5ba093d89b2b534ab198165ab325cec}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jul 23 11:11:46.538: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 23 11:11:48.541: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:11:48.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-s4hp5" for this suite. 
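The single-line Go struct dump of the pod above is hard to read; reconstructed as a manifest, the logged `ObjectMeta`/`PodSpec` corresponds roughly to the following (defaulted and empty fields omitted; this is a readback of the dump, not the test's source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: send-events-488403f0-ccd5-11ea-92a5-0242ac11000b
  namespace: e2e-tests-events-s4hp5
  labels:
    name: foo
    time: "493315917"
spec:
  restartPolicy: Always
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
      protocol: TCP
```

The test then asserts that a scheduler event and a kubelet event are recorded for this pod, which is what the two "Saw ... event for our pod" lines below confirm.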
Jul 23 11:12:26.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:12:26.616: INFO: namespace: e2e-tests-events-s4hp5, resource: bindings, ignored listing per whitelist Jul 23 11:12:26.657: INFO: namespace e2e-tests-events-s4hp5 deletion completed in 38.091208988s • [SLOW TEST:46.332 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:12:26.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-6415b734-ccd5-11ea-92a5-0242ac11000b STEP: Creating a pod to test consume secrets Jul 23 11:12:26.794: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-641bebe2-ccd5-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-zrb25" to be "success or failure" Jul 23 11:12:26.817: INFO: Pod "pod-projected-secrets-641bebe2-ccd5-11ea-92a5-0242ac11000b": Phase="Pending", 
Reason="", readiness=false. Elapsed: 23.113898ms Jul 23 11:12:29.118: INFO: Pod "pod-projected-secrets-641bebe2-ccd5-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323818241s Jul 23 11:12:31.764: INFO: Pod "pod-projected-secrets-641bebe2-ccd5-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.96999302s Jul 23 11:12:33.788: INFO: Pod "pod-projected-secrets-641bebe2-ccd5-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.993682336s STEP: Saw pod success Jul 23 11:12:33.788: INFO: Pod "pod-projected-secrets-641bebe2-ccd5-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:12:33.790: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-641bebe2-ccd5-11ea-92a5-0242ac11000b container projected-secret-volume-test: STEP: delete the pod Jul 23 11:12:33.861: INFO: Waiting for pod pod-projected-secrets-641bebe2-ccd5-11ea-92a5-0242ac11000b to disappear Jul 23 11:12:33.961: INFO: Pod pod-projected-secrets-641bebe2-ccd5-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:12:33.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zrb25" for this suite. 
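The projected-secret "with mappings" tests above create pods along these lines: a secret is projected into a volume with an `items` entry that maps a key to a different file path, and a container `cat`s the mapped path. The container name below appears in the log; the secret name, key, and paths are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo     # illustrative, not the generated name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # illustrative secret name
          items:
          - key: data-1                     # illustrative key
            path: new-path-data-1           # the "mapping" under test
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
```

The pod is expected to exit successfully after printing the mapped file, which is why the framework polls it for the "success or failure" condition.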
Jul 23 11:12:41.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:12:41.523: INFO: namespace: e2e-tests-projected-zrb25, resource: bindings, ignored listing per whitelist Jul 23 11:12:41.574: INFO: namespace e2e-tests-projected-zrb25 deletion completed in 7.587594087s • [SLOW TEST:14.916 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:12:41.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-6d727ce5-ccd5-11ea-92a5-0242ac11000b STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-6d727ce5-ccd5-11ea-92a5-0242ac11000b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:14:13.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-projected-cv45b" for this suite. Jul 23 11:14:38.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:14:39.042: INFO: namespace: e2e-tests-projected-cv45b, resource: bindings, ignored listing per whitelist Jul 23 11:14:39.119: INFO: namespace e2e-tests-projected-cv45b deletion completed in 25.982166655s • [SLOW TEST:117.545 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:14:39.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jul 23 11:14:56.627: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nn99l PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Jul 23 11:14:56.627: INFO: >>> kubeConfig: /root/.kube/config I0723 11:14:56.661529 6 log.go:172] (0xc00184c4d0) (0xc001b31d60) Create stream I0723 11:14:56.661545 6 log.go:172] (0xc00184c4d0) (0xc001b31d60) Stream added, broadcasting: 1 I0723 11:14:56.663817 6 log.go:172] (0xc00184c4d0) Reply frame received for 1 I0723 11:14:56.663849 6 log.go:172] (0xc00184c4d0) (0xc0014dad20) Create stream I0723 11:14:56.663863 6 log.go:172] (0xc00184c4d0) (0xc0014dad20) Stream added, broadcasting: 3 I0723 11:14:56.664812 6 log.go:172] (0xc00184c4d0) Reply frame received for 3 I0723 11:14:56.664873 6 log.go:172] (0xc00184c4d0) (0xc001b31e00) Create stream I0723 11:14:56.664899 6 log.go:172] (0xc00184c4d0) (0xc001b31e00) Stream added, broadcasting: 5 I0723 11:14:56.665724 6 log.go:172] (0xc00184c4d0) Reply frame received for 5 I0723 11:14:56.734570 6 log.go:172] (0xc00184c4d0) Data frame received for 3 I0723 11:14:56.734602 6 log.go:172] (0xc0014dad20) (3) Data frame handling I0723 11:14:56.734646 6 log.go:172] (0xc0014dad20) (3) Data frame sent I0723 11:14:56.734663 6 log.go:172] (0xc00184c4d0) Data frame received for 3 I0723 11:14:56.734673 6 log.go:172] (0xc0014dad20) (3) Data frame handling I0723 11:14:56.734816 6 log.go:172] (0xc00184c4d0) Data frame received for 5 I0723 11:14:56.734845 6 log.go:172] (0xc001b31e00) (5) Data frame handling I0723 11:14:56.735686 6 log.go:172] (0xc00184c4d0) Data frame received for 1 I0723 11:14:56.735705 6 log.go:172] (0xc001b31d60) (1) Data frame handling I0723 11:14:56.735729 6 log.go:172] (0xc001b31d60) (1) Data frame sent I0723 11:14:56.735744 6 log.go:172] (0xc00184c4d0) (0xc001b31d60) Stream removed, broadcasting: 1 I0723 11:14:56.735836 6 log.go:172] (0xc00184c4d0) (0xc001b31d60) Stream removed, broadcasting: 1 I0723 11:14:56.735860 6 log.go:172] (0xc00184c4d0) (0xc0014dad20) Stream removed, broadcasting: 3 I0723 11:14:56.735901 6 log.go:172] (0xc00184c4d0) Go away received I0723 11:14:56.736037 6 log.go:172] 
(0xc00184c4d0) (0xc001b31e00) Stream removed, broadcasting: 5 Jul 23 11:14:56.736: INFO: Exec stderr: "" Jul 23 11:14:56.736: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nn99l PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 23 11:14:56.736: INFO: >>> kubeConfig: /root/.kube/config I0723 11:14:56.768661 6 log.go:172] (0xc00184c9a0) (0xc001cbe140) Create stream I0723 11:14:56.768685 6 log.go:172] (0xc00184c9a0) (0xc001cbe140) Stream added, broadcasting: 1 I0723 11:14:56.771218 6 log.go:172] (0xc00184c9a0) Reply frame received for 1 I0723 11:14:56.771250 6 log.go:172] (0xc00184c9a0) (0xc0018b2640) Create stream I0723 11:14:56.771258 6 log.go:172] (0xc00184c9a0) (0xc0018b2640) Stream added, broadcasting: 3 I0723 11:14:56.772166 6 log.go:172] (0xc00184c9a0) Reply frame received for 3 I0723 11:14:56.772206 6 log.go:172] (0xc00184c9a0) (0xc0018b26e0) Create stream I0723 11:14:56.772219 6 log.go:172] (0xc00184c9a0) (0xc0018b26e0) Stream added, broadcasting: 5 I0723 11:14:56.773441 6 log.go:172] (0xc00184c9a0) Reply frame received for 5 I0723 11:14:56.828639 6 log.go:172] (0xc00184c9a0) Data frame received for 5 I0723 11:14:56.828693 6 log.go:172] (0xc0018b26e0) (5) Data frame handling I0723 11:14:56.828782 6 log.go:172] (0xc00184c9a0) Data frame received for 3 I0723 11:14:56.828814 6 log.go:172] (0xc0018b2640) (3) Data frame handling I0723 11:14:56.828837 6 log.go:172] (0xc0018b2640) (3) Data frame sent I0723 11:14:56.828850 6 log.go:172] (0xc00184c9a0) Data frame received for 3 I0723 11:14:56.828859 6 log.go:172] (0xc0018b2640) (3) Data frame handling I0723 11:14:56.830210 6 log.go:172] (0xc00184c9a0) Data frame received for 1 I0723 11:14:56.830271 6 log.go:172] (0xc001cbe140) (1) Data frame handling I0723 11:14:56.830315 6 log.go:172] (0xc001cbe140) (1) Data frame sent I0723 11:14:56.830336 6 log.go:172] (0xc00184c9a0) (0xc001cbe140) Stream removed, 
broadcasting: 1 I0723 11:14:56.830351 6 log.go:172] (0xc00184c9a0) Go away received I0723 11:14:56.830480 6 log.go:172] (0xc00184c9a0) (0xc001cbe140) Stream removed, broadcasting: 1 I0723 11:14:56.830500 6 log.go:172] (0xc00184c9a0) (0xc0018b2640) Stream removed, broadcasting: 3 I0723 11:14:56.830512 6 log.go:172] (0xc00184c9a0) (0xc0018b26e0) Stream removed, broadcasting: 5 Jul 23 11:14:56.830: INFO: Exec stderr: "" Jul 23 11:14:56.830: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nn99l PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 23 11:14:56.830: INFO: >>> kubeConfig: /root/.kube/config I0723 11:14:56.864259 6 log.go:172] (0xc00186a4d0) (0xc0018b2960) Create stream I0723 11:14:56.864290 6 log.go:172] (0xc00186a4d0) (0xc0018b2960) Stream added, broadcasting: 1 I0723 11:14:56.866430 6 log.go:172] (0xc00186a4d0) Reply frame received for 1 I0723 11:14:56.866457 6 log.go:172] (0xc00186a4d0) (0xc0005d79a0) Create stream I0723 11:14:56.866470 6 log.go:172] (0xc00186a4d0) (0xc0005d79a0) Stream added, broadcasting: 3 I0723 11:14:56.867222 6 log.go:172] (0xc00186a4d0) Reply frame received for 3 I0723 11:14:56.867256 6 log.go:172] (0xc00186a4d0) (0xc0018b2a00) Create stream I0723 11:14:56.867272 6 log.go:172] (0xc00186a4d0) (0xc0018b2a00) Stream added, broadcasting: 5 I0723 11:14:56.867984 6 log.go:172] (0xc00186a4d0) Reply frame received for 5 I0723 11:14:56.938769 6 log.go:172] (0xc00186a4d0) Data frame received for 5 I0723 11:14:56.938791 6 log.go:172] (0xc0018b2a00) (5) Data frame handling I0723 11:14:56.938834 6 log.go:172] (0xc00186a4d0) Data frame received for 3 I0723 11:14:56.938869 6 log.go:172] (0xc0005d79a0) (3) Data frame handling I0723 11:14:56.938886 6 log.go:172] (0xc0005d79a0) (3) Data frame sent I0723 11:14:56.938896 6 log.go:172] (0xc00186a4d0) Data frame received for 3 I0723 11:14:56.938905 6 log.go:172] (0xc0005d79a0) (3) Data frame 
handling I0723 11:14:56.940032 6 log.go:172] (0xc00186a4d0) Data frame received for 1 I0723 11:14:56.940057 6 log.go:172] (0xc0018b2960) (1) Data frame handling I0723 11:14:56.940083 6 log.go:172] (0xc0018b2960) (1) Data frame sent I0723 11:14:56.940101 6 log.go:172] (0xc00186a4d0) (0xc0018b2960) Stream removed, broadcasting: 1 I0723 11:14:56.940180 6 log.go:172] (0xc00186a4d0) Go away received I0723 11:14:56.940219 6 log.go:172] (0xc00186a4d0) (0xc0018b2960) Stream removed, broadcasting: 1 I0723 11:14:56.940239 6 log.go:172] (0xc00186a4d0) (0xc0005d79a0) Stream removed, broadcasting: 3 I0723 11:14:56.940251 6 log.go:172] (0xc00186a4d0) (0xc0018b2a00) Stream removed, broadcasting: 5 Jul 23 11:14:56.940: INFO: Exec stderr: "" Jul 23 11:14:56.940: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nn99l PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 23 11:14:56.940: INFO: >>> kubeConfig: /root/.kube/config I0723 11:14:56.964657 6 log.go:172] (0xc000e1b340) (0xc0014db040) Create stream I0723 11:14:56.964686 6 log.go:172] (0xc000e1b340) (0xc0014db040) Stream added, broadcasting: 1 I0723 11:14:56.966524 6 log.go:172] (0xc000e1b340) Reply frame received for 1 I0723 11:14:56.966553 6 log.go:172] (0xc000e1b340) (0xc0014db0e0) Create stream I0723 11:14:56.966564 6 log.go:172] (0xc000e1b340) (0xc0014db0e0) Stream added, broadcasting: 3 I0723 11:14:56.967321 6 log.go:172] (0xc000e1b340) Reply frame received for 3 I0723 11:14:56.967338 6 log.go:172] (0xc000e1b340) (0xc0014db180) Create stream I0723 11:14:56.967346 6 log.go:172] (0xc000e1b340) (0xc0014db180) Stream added, broadcasting: 5 I0723 11:14:56.967905 6 log.go:172] (0xc000e1b340) Reply frame received for 5 I0723 11:14:57.018500 6 log.go:172] (0xc000e1b340) Data frame received for 5 I0723 11:14:57.018609 6 log.go:172] (0xc0014db180) (5) Data frame handling I0723 11:14:57.018691 6 log.go:172] (0xc000e1b340) 
Data frame received for 3 I0723 11:14:57.018780 6 log.go:172] (0xc0014db0e0) (3) Data frame handling I0723 11:14:57.018827 6 log.go:172] (0xc0014db0e0) (3) Data frame sent I0723 11:14:57.018888 6 log.go:172] (0xc000e1b340) Data frame received for 3 I0723 11:14:57.018918 6 log.go:172] (0xc0014db0e0) (3) Data frame handling I0723 11:14:57.020218 6 log.go:172] (0xc000e1b340) Data frame received for 1 I0723 11:14:57.020292 6 log.go:172] (0xc0014db040) (1) Data frame handling I0723 11:14:57.020328 6 log.go:172] (0xc0014db040) (1) Data frame sent I0723 11:14:57.020357 6 log.go:172] (0xc000e1b340) (0xc0014db040) Stream removed, broadcasting: 1 I0723 11:14:57.020476 6 log.go:172] (0xc000e1b340) (0xc0014db040) Stream removed, broadcasting: 1 I0723 11:14:57.020513 6 log.go:172] (0xc000e1b340) (0xc0014db0e0) Stream removed, broadcasting: 3 I0723 11:14:57.020537 6 log.go:172] (0xc000e1b340) (0xc0014db180) Stream removed, broadcasting: 5 Jul 23 11:14:57.020: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount I0723 11:14:57.020856 6 log.go:172] (0xc000e1b340) Go away received Jul 23 11:14:57.020: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nn99l PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 23 11:14:57.020: INFO: >>> kubeConfig: /root/.kube/config I0723 11:14:57.048234 6 log.go:172] (0xc0011d62c0) (0xc001af40a0) Create stream I0723 11:14:57.048264 6 log.go:172] (0xc0011d62c0) (0xc001af40a0) Stream added, broadcasting: 1 I0723 11:14:57.050233 6 log.go:172] (0xc0011d62c0) Reply frame received for 1 I0723 11:14:57.050258 6 log.go:172] (0xc0011d62c0) (0xc001af4140) Create stream I0723 11:14:57.050266 6 log.go:172] (0xc0011d62c0) (0xc001af4140) Stream added, broadcasting: 3 I0723 11:14:57.050782 6 log.go:172] (0xc0011d62c0) Reply frame received for 3 I0723 11:14:57.050801 6 log.go:172] 
(0xc0011d62c0) (0xc0014db220) Create stream I0723 11:14:57.050808 6 log.go:172] (0xc0011d62c0) (0xc0014db220) Stream added, broadcasting: 5 I0723 11:14:57.051343 6 log.go:172] (0xc0011d62c0) Reply frame received for 5 I0723 11:14:57.087826 6 log.go:172] (0xc0011d62c0) Data frame received for 5 I0723 11:14:57.087849 6 log.go:172] (0xc0011d62c0) Data frame received for 3 I0723 11:14:57.087873 6 log.go:172] (0xc001af4140) (3) Data frame handling I0723 11:14:57.087886 6 log.go:172] (0xc001af4140) (3) Data frame sent I0723 11:14:57.087893 6 log.go:172] (0xc0011d62c0) Data frame received for 3 I0723 11:14:57.087902 6 log.go:172] (0xc001af4140) (3) Data frame handling I0723 11:14:57.087930 6 log.go:172] (0xc0014db220) (5) Data frame handling I0723 11:14:57.088928 6 log.go:172] (0xc0011d62c0) Data frame received for 1 I0723 11:14:57.088949 6 log.go:172] (0xc001af40a0) (1) Data frame handling I0723 11:14:57.088961 6 log.go:172] (0xc001af40a0) (1) Data frame sent I0723 11:14:57.088975 6 log.go:172] (0xc0011d62c0) (0xc001af40a0) Stream removed, broadcasting: 1 I0723 11:14:57.088999 6 log.go:172] (0xc0011d62c0) Go away received I0723 11:14:57.089058 6 log.go:172] (0xc0011d62c0) (0xc001af40a0) Stream removed, broadcasting: 1 I0723 11:14:57.089068 6 log.go:172] (0xc0011d62c0) (0xc001af4140) Stream removed, broadcasting: 3 I0723 11:14:57.089072 6 log.go:172] (0xc0011d62c0) (0xc0014db220) Stream removed, broadcasting: 5 Jul 23 11:14:57.089: INFO: Exec stderr: "" Jul 23 11:14:57.089: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nn99l PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 23 11:14:57.089: INFO: >>> kubeConfig: /root/.kube/config I0723 11:14:57.110742 6 log.go:172] (0xc0011d6790) (0xc001af4500) Create stream I0723 11:14:57.110784 6 log.go:172] (0xc0011d6790) (0xc001af4500) Stream added, broadcasting: 1 I0723 11:14:57.112694 6 log.go:172] (0xc0011d6790) 
Reply frame received for 1 I0723 11:14:57.112859 6 log.go:172] (0xc0011d6790) (0xc0014db360) Create stream I0723 11:14:57.112874 6 log.go:172] (0xc0011d6790) (0xc0014db360) Stream added, broadcasting: 3 I0723 11:14:57.113643 6 log.go:172] (0xc0011d6790) Reply frame received for 3 I0723 11:14:57.113676 6 log.go:172] (0xc0011d6790) (0xc000be4640) Create stream I0723 11:14:57.113692 6 log.go:172] (0xc0011d6790) (0xc000be4640) Stream added, broadcasting: 5 I0723 11:14:57.114451 6 log.go:172] (0xc0011d6790) Reply frame received for 5 I0723 11:14:57.174460 6 log.go:172] (0xc0011d6790) Data frame received for 5 I0723 11:14:57.174483 6 log.go:172] (0xc000be4640) (5) Data frame handling I0723 11:14:57.174498 6 log.go:172] (0xc0011d6790) Data frame received for 3 I0723 11:14:57.174503 6 log.go:172] (0xc0014db360) (3) Data frame handling I0723 11:14:57.174511 6 log.go:172] (0xc0014db360) (3) Data frame sent I0723 11:14:57.174517 6 log.go:172] (0xc0011d6790) Data frame received for 3 I0723 11:14:57.174522 6 log.go:172] (0xc0014db360) (3) Data frame handling I0723 11:14:57.175158 6 log.go:172] (0xc0011d6790) Data frame received for 1 I0723 11:14:57.175175 6 log.go:172] (0xc001af4500) (1) Data frame handling I0723 11:14:57.175186 6 log.go:172] (0xc001af4500) (1) Data frame sent I0723 11:14:57.175200 6 log.go:172] (0xc0011d6790) (0xc001af4500) Stream removed, broadcasting: 1 I0723 11:14:57.175214 6 log.go:172] (0xc0011d6790) Go away received I0723 11:14:57.175313 6 log.go:172] (0xc0011d6790) (0xc001af4500) Stream removed, broadcasting: 1 I0723 11:14:57.175324 6 log.go:172] (0xc0011d6790) (0xc0014db360) Stream removed, broadcasting: 3 I0723 11:14:57.175330 6 log.go:172] (0xc0011d6790) (0xc000be4640) Stream removed, broadcasting: 5 Jul 23 11:14:57.175: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 23 11:14:57.175: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-tests-e2e-kubelet-etc-hosts-nn99l PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 23 11:14:57.175: INFO: >>> kubeConfig: /root/.kube/config
I0723 11:14:57.195964 6 log.go:172] (0xc00186a9a0) (0xc0018b2d20) Create stream
I0723 11:14:57.195986 6 log.go:172] (0xc00186a9a0) (0xc0018b2d20) Stream added, broadcasting: 1
I0723 11:14:57.197290 6 log.go:172] (0xc00186a9a0) Reply frame received for 1
I0723 11:14:57.197309 6 log.go:172] (0xc00186a9a0) (0xc001cbe1e0) Create stream
I0723 11:14:57.197319 6 log.go:172] (0xc00186a9a0) (0xc001cbe1e0) Stream added, broadcasting: 3
I0723 11:14:57.198013 6 log.go:172] (0xc00186a9a0) Reply frame received for 3
I0723 11:14:57.198046 6 log.go:172] (0xc00186a9a0) (0xc001af45a0) Create stream
I0723 11:14:57.198069 6 log.go:172] (0xc00186a9a0) (0xc001af45a0) Stream added, broadcasting: 5
I0723 11:14:57.198798 6 log.go:172] (0xc00186a9a0) Reply frame received for 5
I0723 11:14:57.251045 6 log.go:172] (0xc00186a9a0) Data frame received for 3
I0723 11:14:57.251075 6 log.go:172] (0xc001cbe1e0) (3) Data frame handling
I0723 11:14:57.251085 6 log.go:172] (0xc001cbe1e0) (3) Data frame sent
I0723 11:14:57.251092 6 log.go:172] (0xc00186a9a0) Data frame received for 3
I0723 11:14:57.251098 6 log.go:172] (0xc001cbe1e0) (3) Data frame handling
I0723 11:14:57.251124 6 log.go:172] (0xc00186a9a0) Data frame received for 5
I0723 11:14:57.251134 6 log.go:172] (0xc001af45a0) (5) Data frame handling
I0723 11:14:57.252836 6 log.go:172] (0xc00186a9a0) Data frame received for 1
I0723 11:14:57.252850 6 log.go:172] (0xc0018b2d20) (1) Data frame handling
I0723 11:14:57.252859 6 log.go:172] (0xc0018b2d20) (1) Data frame sent
I0723 11:14:57.253037 6 log.go:172] (0xc00186a9a0) (0xc0018b2d20) Stream removed, broadcasting: 1
I0723 11:14:57.253100 6 log.go:172] (0xc00186a9a0) (0xc0018b2d20) Stream removed, broadcasting: 1
I0723 11:14:57.253126 6 log.go:172] (0xc00186a9a0) (0xc001cbe1e0) Stream removed, broadcasting: 3
I0723 11:14:57.253194 6 log.go:172] (0xc00186a9a0) Go away received
I0723 11:14:57.253230 6 log.go:172] (0xc00186a9a0) (0xc001af45a0) Stream removed, broadcasting: 5
Jul 23 11:14:57.253: INFO: Exec stderr: ""
Jul 23 11:14:57.253: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nn99l PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 23 11:14:57.253: INFO: >>> kubeConfig: /root/.kube/config
I0723 11:14:57.276700 6 log.go:172] (0xc000e1b810) (0xc0014db5e0) Create stream
I0723 11:14:57.276763 6 log.go:172] (0xc000e1b810) (0xc0014db5e0) Stream added, broadcasting: 1
I0723 11:14:57.278661 6 log.go:172] (0xc000e1b810) Reply frame received for 1
I0723 11:14:57.278716 6 log.go:172] (0xc000e1b810) (0xc001cbe280) Create stream
I0723 11:14:57.278723 6 log.go:172] (0xc000e1b810) (0xc001cbe280) Stream added, broadcasting: 3
I0723 11:14:57.279510 6 log.go:172] (0xc000e1b810) Reply frame received for 3
I0723 11:14:57.279554 6 log.go:172] (0xc000e1b810) (0xc0018b2dc0) Create stream
I0723 11:14:57.279573 6 log.go:172] (0xc000e1b810) (0xc0018b2dc0) Stream added, broadcasting: 5
I0723 11:14:57.280368 6 log.go:172] (0xc000e1b810) Reply frame received for 5
I0723 11:14:57.315014 6 log.go:172] (0xc000e1b810) Data frame received for 5
I0723 11:14:57.315049 6 log.go:172] (0xc0018b2dc0) (5) Data frame handling
I0723 11:14:57.315075 6 log.go:172] (0xc000e1b810) Data frame received for 3
I0723 11:14:57.315087 6 log.go:172] (0xc001cbe280) (3) Data frame handling
I0723 11:14:57.315100 6 log.go:172] (0xc001cbe280) (3) Data frame sent
I0723 11:14:57.315112 6 log.go:172] (0xc000e1b810) Data frame received for 3
I0723 11:14:57.315124 6 log.go:172] (0xc001cbe280) (3) Data frame handling
I0723 11:14:57.315759 6 log.go:172] (0xc000e1b810) Data frame received for 1
I0723 11:14:57.315774 6 log.go:172] (0xc0014db5e0) (1) Data frame handling
I0723 11:14:57.315782 6 log.go:172] (0xc0014db5e0) (1) Data frame sent
I0723 11:14:57.315795 6 log.go:172] (0xc000e1b810) (0xc0014db5e0) Stream removed, broadcasting: 1
I0723 11:14:57.315805 6 log.go:172] (0xc000e1b810) Go away received
I0723 11:14:57.315923 6 log.go:172] (0xc000e1b810) (0xc0014db5e0) Stream removed, broadcasting: 1
I0723 11:14:57.315946 6 log.go:172] (0xc000e1b810) (0xc001cbe280) Stream removed, broadcasting: 3
I0723 11:14:57.315961 6 log.go:172] (0xc000e1b810) (0xc0018b2dc0) Stream removed, broadcasting: 5
Jul 23 11:14:57.315: INFO: Exec stderr: ""
Jul 23 11:14:57.315: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nn99l PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 23 11:14:57.316: INFO: >>> kubeConfig: /root/.kube/config
I0723 11:14:57.338635 6 log.go:172] (0xc00186ae70) (0xc0018b3220) Create stream
I0723 11:14:57.338661 6 log.go:172] (0xc00186ae70) (0xc0018b3220) Stream added, broadcasting: 1
I0723 11:14:57.340009 6 log.go:172] (0xc00186ae70) Reply frame received for 1
I0723 11:14:57.340040 6 log.go:172] (0xc00186ae70) (0xc000be4780) Create stream
I0723 11:14:57.340049 6 log.go:172] (0xc00186ae70) (0xc000be4780) Stream added, broadcasting: 3
I0723 11:14:57.340699 6 log.go:172] (0xc00186ae70) Reply frame received for 3
I0723 11:14:57.340719 6 log.go:172] (0xc00186ae70) (0xc001cbe320) Create stream
I0723 11:14:57.340781 6 log.go:172] (0xc00186ae70) (0xc001cbe320) Stream added, broadcasting: 5
I0723 11:14:57.341360 6 log.go:172] (0xc00186ae70) Reply frame received for 5
I0723 11:14:57.395373 6 log.go:172] (0xc00186ae70) Data frame received for 5
I0723 11:14:57.395389 6 log.go:172] (0xc001cbe320) (5) Data frame handling
I0723 11:14:57.395417 6 log.go:172] (0xc00186ae70) Data frame received for 3
I0723 11:14:57.395442 6 log.go:172] (0xc000be4780) (3) Data frame handling
I0723 11:14:57.395454 6 log.go:172] (0xc000be4780) (3) Data frame sent
I0723 11:14:57.395464 6 log.go:172] (0xc00186ae70) Data frame received for 3
I0723 11:14:57.395471 6 log.go:172] (0xc000be4780) (3) Data frame handling
I0723 11:14:57.396078 6 log.go:172] (0xc00186ae70) Data frame received for 1
I0723 11:14:57.396092 6 log.go:172] (0xc0018b3220) (1) Data frame handling
I0723 11:14:57.396104 6 log.go:172] (0xc0018b3220) (1) Data frame sent
I0723 11:14:57.396114 6 log.go:172] (0xc00186ae70) (0xc0018b3220) Stream removed, broadcasting: 1
I0723 11:14:57.396129 6 log.go:172] (0xc00186ae70) Go away received
I0723 11:14:57.396233 6 log.go:172] (0xc00186ae70) (0xc0018b3220) Stream removed, broadcasting: 1
I0723 11:14:57.396244 6 log.go:172] (0xc00186ae70) (0xc000be4780) Stream removed, broadcasting: 3
I0723 11:14:57.396251 6 log.go:172] (0xc00186ae70) (0xc001cbe320) Stream removed, broadcasting: 5
Jul 23 11:14:57.396: INFO: Exec stderr: ""
Jul 23 11:14:57.396: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nn99l PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 23 11:14:57.396: INFO: >>> kubeConfig: /root/.kube/config
I0723 11:14:57.420171 6 log.go:172] (0xc0011d6c60) (0xc001af4820) Create stream
I0723 11:14:57.420190 6 log.go:172] (0xc0011d6c60) (0xc001af4820) Stream added, broadcasting: 1
I0723 11:14:57.426886 6 log.go:172] (0xc0011d6c60) Reply frame received for 1
I0723 11:14:57.426914 6 log.go:172] (0xc0011d6c60) (0xc001b30000) Create stream
I0723 11:14:57.426923 6 log.go:172] (0xc0011d6c60) (0xc001b30000) Stream added, broadcasting: 3
I0723 11:14:57.427610 6 log.go:172] (0xc0011d6c60) Reply frame received for 3
I0723 11:14:57.427645 6 log.go:172] (0xc0011d6c60) (0xc0013940a0) Create stream
I0723 11:14:57.427657 6 log.go:172] (0xc0011d6c60) (0xc0013940a0) Stream added, broadcasting: 5
I0723 11:14:57.428325 6 log.go:172] (0xc0011d6c60) Reply frame received for 5
I0723 11:14:57.480479 6 log.go:172] (0xc0011d6c60) Data frame received for 5
I0723 11:14:57.480507 6 log.go:172] (0xc0013940a0) (5) Data frame handling
I0723 11:14:57.480552 6 log.go:172] (0xc0011d6c60) Data frame received for 3
I0723 11:14:57.480587 6 log.go:172] (0xc001b30000) (3) Data frame handling
I0723 11:14:57.480614 6 log.go:172] (0xc001b30000) (3) Data frame sent
I0723 11:14:57.480629 6 log.go:172] (0xc0011d6c60) Data frame received for 3
I0723 11:14:57.480638 6 log.go:172] (0xc001b30000) (3) Data frame handling
I0723 11:14:57.481604 6 log.go:172] (0xc0011d6c60) Data frame received for 1
I0723 11:14:57.481629 6 log.go:172] (0xc001af4820) (1) Data frame handling
I0723 11:14:57.481647 6 log.go:172] (0xc001af4820) (1) Data frame sent
I0723 11:14:57.481667 6 log.go:172] (0xc0011d6c60) (0xc001af4820) Stream removed, broadcasting: 1
I0723 11:14:57.481759 6 log.go:172] (0xc0011d6c60) (0xc001af4820) Stream removed, broadcasting: 1
I0723 11:14:57.481780 6 log.go:172] (0xc0011d6c60) (0xc001b30000) Stream removed, broadcasting: 3
I0723 11:14:57.481799 6 log.go:172] (0xc0011d6c60) (0xc0013940a0) Stream removed, broadcasting: 5
Jul 23 11:14:57.481: INFO: Exec stderr: ""
I0723 11:14:57.481828 6 log.go:172] (0xc0011d6c60) Go away received
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:14:57.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-nn99l" for this suite.
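[Editor's note] The exec transcript above runs `cat /etc/hosts` and `cat /etc/hosts-original` inside containers with and without kubelet-managed hosts files. The distinguishing feature is the management header kubelet writes at the top of the files it manages. A minimal local sketch of that distinction (the helper and the exact header text are assumptions for illustration, not the framework's code):

```python
# Sketch: decide whether an /etc/hosts payload looks kubelet-managed.
# KUBELET_HEADER is the comment kubelet prepends to managed hosts files;
# treat the exact wording as an assumption, not a verified constant.
KUBELET_HEADER = "# Kubernetes-managed hosts file"

def is_kubelet_managed(hosts_content: str) -> bool:
    """Return True if the file begins with the kubelet management header."""
    return hosts_content.lstrip().startswith(KUBELET_HEADER)

managed = "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
original = "127.0.0.1\tlocalhost\n"
print(is_kubelet_managed(managed))   # True
print(is_kubelet_managed(original))  # False
```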
Jul 23 11:15:43.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:15:43.543: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-nn99l, resource: bindings, ignored listing per whitelist
Jul 23 11:15:43.558: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-nn99l deletion completed in 46.074006776s
• [SLOW TEST:64.439 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:15:43.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jul 23 11:15:44.196: INFO: Waiting up to 5m0s for pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w" in namespace "e2e-tests-svcaccounts-mdpfr" to be "success or failure"
Jul 23 11:15:44.205: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.873076ms
Jul 23 11:15:46.209: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012820078s
Jul 23 11:15:48.394: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198623203s
Jul 23 11:15:50.399: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202993434s
Jul 23 11:15:52.577: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.381130316s
Jul 23 11:15:54.580: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.384384064s
Jul 23 11:15:56.583: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w": Phase="Pending", Reason="", readiness=false. Elapsed: 12.387392191s
Jul 23 11:15:58.634: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w": Phase="Pending", Reason="", readiness=false. Elapsed: 14.43805791s
Jul 23 11:16:00.637: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w": Phase="Running", Reason="", readiness=false. Elapsed: 16.441275937s
Jul 23 11:16:02.641: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.445204532s
STEP: Saw pod success
Jul 23 11:16:02.641: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w" satisfied condition "success or failure"
Jul 23 11:16:02.643: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w container token-test:
STEP: delete the pod
Jul 23 11:16:04.413: INFO: Waiting for pod pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w to disappear
Jul 23 11:16:04.416: INFO: Pod pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-79p4w no longer exists
STEP: Creating a pod to test consume service account root CA
Jul 23 11:16:04.461: INFO: Waiting up to 5m0s for pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r" in namespace "e2e-tests-svcaccounts-mdpfr" to be "success or failure"
Jul 23 11:16:04.875: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r": Phase="Pending", Reason="", readiness=false. Elapsed: 413.03102ms
Jul 23 11:16:06.952: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490228442s
Jul 23 11:16:08.955: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.493343575s
Jul 23 11:16:10.957: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.495611463s
Jul 23 11:16:13.713: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r": Phase="Pending", Reason="", readiness=false. Elapsed: 9.25101245s
Jul 23 11:16:15.716: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r": Phase="Pending", Reason="", readiness=false. Elapsed: 11.25465872s
Jul 23 11:16:17.948: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r": Phase="Pending", Reason="", readiness=false. Elapsed: 13.486640063s
Jul 23 11:16:19.952: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r": Phase="Pending", Reason="", readiness=false. Elapsed: 15.490525208s
Jul 23 11:16:21.957: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.495142692s
STEP: Saw pod success
Jul 23 11:16:21.957: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r" satisfied condition "success or failure"
Jul 23 11:16:21.960: INFO: Trying to get logs from node hunter-worker pod pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r container root-ca-test:
STEP: delete the pod
Jul 23 11:16:22.527: INFO: Waiting for pod pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r to disappear
Jul 23 11:16:22.530: INFO: Pod pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-whr6r no longer exists
STEP: Creating a pod to test consume service account namespace
Jul 23 11:16:22.537: INFO: Waiting up to 5m0s for pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn" in namespace "e2e-tests-svcaccounts-mdpfr" to be "success or failure"
Jul 23 11:16:22.578: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn": Phase="Pending", Reason="", readiness=false. Elapsed: 40.795482ms
Jul 23 11:16:24.581: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043737657s
Jul 23 11:16:26.773: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23597153s
Jul 23 11:16:29.802: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn": Phase="Pending", Reason="", readiness=false. Elapsed: 7.2652945s
Jul 23 11:16:31.806: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn": Phase="Pending", Reason="", readiness=false. Elapsed: 9.268593491s
Jul 23 11:16:34.731: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.193695531s
Jul 23 11:16:36.734: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.196650776s
STEP: Saw pod success
Jul 23 11:16:36.734: INFO: Pod "pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn" satisfied condition "success or failure"
Jul 23 11:16:36.736: INFO: Trying to get logs from node hunter-worker pod pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn container namespace-test:
STEP: delete the pod
Jul 23 11:16:36.830: INFO: Waiting for pod pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn to disappear
Jul 23 11:16:36.856: INFO: Pod pod-service-account-d9c57805-ccd5-11ea-92a5-0242ac11000b-94bkn no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:16:36.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-mdpfr" for this suite.
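[Editor's note] The repeated Phase lines above are the framework's wait loop: it polls the pod until a terminal phase appears, where "Succeeded" satisfies the "success or failure" condition. A simplified local sketch of that loop (hypothetical helper with simulated phases instead of API-server calls):

```python
import itertools

def wait_for_terminal_phase(get_phase, poll_limit=150):
    """Poll a phase-returning callable until a terminal pod phase appears.

    Mirrors the log's "success or failure" condition: "Succeeded" satisfies
    it, "Failed" does not, and any other phase keeps polling. This is a
    local sketch only; the real framework polls the API server on a timer
    with a 5m timeout.
    """
    for _ in range(poll_limit):
        phase = get_phase()
        if phase == "Succeeded":
            return "success"
        if phase == "Failed":
            return "failure"
    raise TimeoutError("pod never reached a terminal phase")

# Simulate the Pending -> Running -> Succeeded sequence seen above.
phases = itertools.chain(["Pending"] * 8, ["Running"], ["Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases)))  # success
```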
Jul 23 11:16:44.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:16:44.955: INFO: namespace: e2e-tests-svcaccounts-mdpfr, resource: bindings, ignored listing per whitelist
Jul 23 11:16:45.191: INFO: namespace e2e-tests-svcaccounts-mdpfr deletion completed in 8.316307938s
• [SLOW TEST:61.633 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:16:45.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-fe47e447-ccd5-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 23 11:16:45.471: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fe485aff-ccd5-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-lfk77" to be "success or failure"
Jul 23 11:16:45.529: INFO: Pod "pod-projected-secrets-fe485aff-ccd5-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.299916ms
Jul 23 11:16:47.653: INFO: Pod "pod-projected-secrets-fe485aff-ccd5-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181699679s
Jul 23 11:16:49.656: INFO: Pod "pod-projected-secrets-fe485aff-ccd5-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184937859s
Jul 23 11:16:51.947: INFO: Pod "pod-projected-secrets-fe485aff-ccd5-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.475073163s
STEP: Saw pod success
Jul 23 11:16:51.947: INFO: Pod "pod-projected-secrets-fe485aff-ccd5-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:16:51.998: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-fe485aff-ccd5-11ea-92a5-0242ac11000b container secret-volume-test:
STEP: delete the pod
Jul 23 11:16:52.393: INFO: Waiting for pod pod-projected-secrets-fe485aff-ccd5-11ea-92a5-0242ac11000b to disappear
Jul 23 11:16:52.522: INFO: Pod pod-projected-secrets-fe485aff-ccd5-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:16:52.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lfk77" for this suite.
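[Editor's note] The test above projects one secret into more than one volume of a single pod. An illustrative manifest builder for that scenario (all names, paths, and the busybox image are hypothetical stand-ins, not the test's actual spec):

```python
# Build a minimal Pod manifest that mounts one secret via two separate
# projected volumes, as in the "consumable in multiple volumes" scenario.
def projected_secret_pod(secret_name: str) -> dict:
    volumes = [
        {"name": f"vol-{i}",
         "projected": {"sources": [{"secret": {"name": secret_name}}]}}
        for i in (1, 2)
    ]
    mounts = [{"name": vol["name"], "mountPath": f"/etc/projected-{i}"}
              for i, vol in enumerate(volumes, start=1)]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-secrets-example"},
        "spec": {
            "containers": [{
                "name": "secret-volume-test",
                "image": "busybox",
                "command": ["sh", "-c",
                            "cat /etc/projected-1/* /etc/projected-2/*"],
                "volumeMounts": mounts,
            }],
            "volumes": volumes,
        },
    }

pod = projected_secret_pod("projected-secret-test-example")
print(len(pod["spec"]["volumes"]))  # 2
```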
Jul 23 11:16:59.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:16:59.316: INFO: namespace: e2e-tests-projected-lfk77, resource: bindings, ignored listing per whitelist
Jul 23 11:16:59.350: INFO: namespace e2e-tests-projected-lfk77 deletion completed in 6.823492434s
• [SLOW TEST:14.158 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:16:59.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:16:59.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-mzv8w" for this suite.
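[Editor's note] The "secure master service" test emits no STEP lines because its whole body is a single API check: the `kubernetes` Service in the `default` namespace should expose a secure port. A sketch of such a check (the dict stands in for the object an API client would return; field names mirror the v1 Service schema, but this is an illustration, not the test's actual code):

```python
# Sketch: verify a Service object exposes an https/443 port, as the
# cluster's `kubernetes` master service is expected to.
def has_secure_port(service: dict) -> bool:
    return any(p.get("port") == 443 and p.get("name") == "https"
               for p in service["spec"]["ports"])

kubernetes_svc = {
    "metadata": {"name": "kubernetes", "namespace": "default"},
    "spec": {"ports": [{"name": "https", "port": 443, "protocol": "TCP"}]},
}
print(has_secure_port(kubernetes_svc))  # True
```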
Jul 23 11:17:08.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:17:08.096: INFO: namespace: e2e-tests-services-mzv8w, resource: bindings, ignored listing per whitelist
Jul 23 11:17:08.159: INFO: namespace e2e-tests-services-mzv8w deletion completed in 8.455127207s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:8.809 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:17:08.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-6b4d
STEP: Creating a pod to test atomic-volume-subpath
Jul 23 11:17:08.613: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6b4d" in namespace "e2e-tests-subpath-r8qpr" to be "success or failure"
Jul 23 11:17:08.706: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 92.549929ms
Jul 23 11:17:10.709: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096153395s
Jul 23 11:17:13.319: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.70562661s
Jul 23 11:17:15.322: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.708520599s
Jul 23 11:17:17.326: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.712461218s
Jul 23 11:17:20.025: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.411806524s
Jul 23 11:17:22.061: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.447357992s
Jul 23 11:17:24.064: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.450900228s
Jul 23 11:17:26.559: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=true. Elapsed: 17.945919212s
Jul 23 11:17:28.561: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 19.947893014s
Jul 23 11:17:30.565: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 21.951240992s
Jul 23 11:17:32.567: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 23.953983628s
Jul 23 11:17:34.571: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 25.957506236s
Jul 23 11:17:37.139: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 28.52526374s
Jul 23 11:17:39.556: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 30.943098185s
Jul 23 11:17:41.559: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 32.945822769s
Jul 23 11:17:43.563: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 34.949503634s
Jul 23 11:17:45.805: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 37.191686169s
Jul 23 11:17:47.808: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 39.194488668s
Jul 23 11:17:50.437: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 41.824083194s
Jul 23 11:17:53.108: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 44.494776275s
Jul 23 11:17:55.112: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 46.49908739s
Jul 23 11:17:57.438: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 48.824972497s
Jul 23 11:18:00.134: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 51.520815172s
Jul 23 11:18:02.283: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Running", Reason="", readiness=false. Elapsed: 53.669252091s
Jul 23 11:18:04.286: INFO: Pod "pod-subpath-test-downwardapi-6b4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 55.672954692s
STEP: Saw pod success
Jul 23 11:18:04.286: INFO: Pod "pod-subpath-test-downwardapi-6b4d" satisfied condition "success or failure"
Jul 23 11:18:04.289: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-6b4d container test-container-subpath-downwardapi-6b4d:
STEP: delete the pod
Jul 23 11:18:04.583: INFO: Waiting for pod pod-subpath-test-downwardapi-6b4d to disappear
Jul 23 11:18:04.983: INFO: Pod pod-subpath-test-downwardapi-6b4d no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-6b4d
Jul 23 11:18:04.983: INFO: Deleting pod "pod-subpath-test-downwardapi-6b4d" in namespace "e2e-tests-subpath-r8qpr"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:18:04.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-r8qpr" for this suite.
Jul 23 11:18:13.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:18:13.565: INFO: namespace: e2e-tests-subpath-r8qpr, resource: bindings, ignored listing per whitelist
Jul 23 11:18:13.594: INFO: namespace e2e-tests-subpath-r8qpr deletion completed in 8.099862082s
• [SLOW TEST:65.435 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:18:13.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 23 11:18:13.735: INFO: Waiting up to 5m0s for pod "downward-api-32e46639-ccd6-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-hhc4d" to be "success or failure"
Jul 23 11:18:14.235: INFO: Pod "downward-api-32e46639-ccd6-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 500.525793ms
Jul 23 11:18:16.238: INFO: Pod "downward-api-32e46639-ccd6-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.503127811s
Jul 23 11:18:18.241: INFO: Pod "downward-api-32e46639-ccd6-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.505805745s
Jul 23 11:18:20.244: INFO: Pod "downward-api-32e46639-ccd6-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.508853312s
STEP: Saw pod success
Jul 23 11:18:20.244: INFO: Pod "downward-api-32e46639-ccd6-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:18:20.246: INFO: Trying to get logs from node hunter-worker pod downward-api-32e46639-ccd6-11ea-92a5-0242ac11000b container dapi-container:
STEP: delete the pod
Jul 23 11:18:20.333: INFO: Waiting for pod downward-api-32e46639-ccd6-11ea-92a5-0242ac11000b to disappear
Jul 23 11:18:20.343: INFO: Pod downward-api-32e46639-ccd6-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:18:20.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hhc4d" for this suite.
Jul 23 11:18:26.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:18:26.464: INFO: namespace: e2e-tests-downward-api-hhc4d, resource: bindings, ignored listing per whitelist
Jul 23 11:18:26.493: INFO: namespace e2e-tests-downward-api-hhc4d deletion completed in 6.146428744s
• [SLOW TEST:12.899 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:18:26.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-96fh
STEP: Creating a pod to test atomic-volume-subpath
Jul 23 11:18:27.350: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-96fh" in namespace "e2e-tests-subpath-4qpln" to be "success or failure"
Jul 23 11:18:27.371: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Pending", Reason="", readiness=false. Elapsed: 20.467305ms
Jul 23 11:18:29.833: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.483153509s
Jul 23 11:18:33.267: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Pending", Reason="", readiness=false. Elapsed: 5.916463561s
Jul 23 11:18:35.271: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Pending", Reason="", readiness=false. Elapsed: 7.920435258s
Jul 23 11:18:37.275: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Pending", Reason="", readiness=false. Elapsed: 9.924806497s
Jul 23 11:18:39.279: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Running", Reason="", readiness=false. Elapsed: 11.928557243s
Jul 23 11:18:41.283: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Running", Reason="", readiness=false. Elapsed: 13.932965327s
Jul 23 11:18:43.349: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Running", Reason="", readiness=false. Elapsed: 15.999206476s
Jul 23 11:18:45.352: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Running", Reason="", readiness=false. Elapsed: 18.00201219s
Jul 23 11:18:47.355: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Running", Reason="", readiness=false. Elapsed: 20.005307948s
Jul 23 11:18:49.358: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Running", Reason="", readiness=false. Elapsed: 22.008227528s
Jul 23 11:18:51.362: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Running", Reason="", readiness=false. Elapsed: 24.01215429s
Jul 23 11:18:53.553: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Running", Reason="", readiness=false. Elapsed: 26.20323014s
Jul 23 11:18:55.557: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Running", Reason="", readiness=false. Elapsed: 28.206889862s
Jul 23 11:18:57.559: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Running", Reason="", readiness=false. Elapsed: 30.209289865s
Jul 23 11:18:59.566: INFO: Pod "pod-subpath-test-configmap-96fh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.216277382s
STEP: Saw pod success
Jul 23 11:18:59.566: INFO: Pod "pod-subpath-test-configmap-96fh" satisfied condition "success or failure"
Jul 23 11:18:59.569: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-96fh container test-container-subpath-configmap-96fh:
STEP: delete the pod
Jul 23 11:18:59.652: INFO: Waiting for pod pod-subpath-test-configmap-96fh to disappear
Jul 23 11:18:59.664: INFO: Pod pod-subpath-test-configmap-96fh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-96fh
Jul 23 11:18:59.664: INFO: Deleting pod "pod-subpath-test-configmap-96fh" in namespace "e2e-tests-subpath-4qpln"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:18:59.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-4qpln" for this suite.
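[Editor's note] The Elapsed values throughout this run are Go duration strings (e.g. "20.467305ms", "32.216277382s"). For anyone post-processing logs like these, a small converter to seconds (an ad-hoc helper handling only the unit suffixes that appear here; full Go durations can compose units like "1m30s", which this deliberately ignores):

```python
import re

# Convert a simple Go-style duration such as "32.216277382s" or
# "20.467305ms" into a float number of seconds.
_UNITS = {"ms": 1e-3, "s": 1.0, "m": 60.0}

def go_duration_to_seconds(text: str) -> float:
    match = re.fullmatch(r"([0-9.]+)(ms|s|m)", text)
    if not match:
        raise ValueError(f"unsupported duration: {text!r}")
    value, unit = match.groups()
    return float(value) * _UNITS[unit]

print(go_duration_to_seconds("20.467305ms"))     # 0.020467305
print(go_duration_to_seconds("32.216277382s"))   # 32.216277382
```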
Jul 23 11:19:05.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:19:05.835: INFO: namespace: e2e-tests-subpath-4qpln, resource: bindings, ignored listing per whitelist Jul 23 11:19:05.839: INFO: namespace e2e-tests-subpath-4qpln deletion completed in 6.17150075s • [SLOW TEST:39.346 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:19:05.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zprzx STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 23 11:19:05.959: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 23 11:19:48.229: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 
10.244.2.19 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zprzx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 23 11:19:48.229: INFO: >>> kubeConfig: /root/.kube/config I0723 11:19:48.272290 6 log.go:172] (0xc00186a630) (0xc000e42820) Create stream I0723 11:19:48.272324 6 log.go:172] (0xc00186a630) (0xc000e42820) Stream added, broadcasting: 1 I0723 11:19:48.274359 6 log.go:172] (0xc00186a630) Reply frame received for 1 I0723 11:19:48.274388 6 log.go:172] (0xc00186a630) (0xc0014db720) Create stream I0723 11:19:48.274397 6 log.go:172] (0xc00186a630) (0xc0014db720) Stream added, broadcasting: 3 I0723 11:19:48.275201 6 log.go:172] (0xc00186a630) Reply frame received for 3 I0723 11:19:48.275238 6 log.go:172] (0xc00186a630) (0xc000e42aa0) Create stream I0723 11:19:48.275249 6 log.go:172] (0xc00186a630) (0xc000e42aa0) Stream added, broadcasting: 5 I0723 11:19:48.276026 6 log.go:172] (0xc00186a630) Reply frame received for 5 I0723 11:19:49.401349 6 log.go:172] (0xc00186a630) Data frame received for 3 I0723 11:19:49.401409 6 log.go:172] (0xc0014db720) (3) Data frame handling I0723 11:19:49.401445 6 log.go:172] (0xc0014db720) (3) Data frame sent I0723 11:19:49.401473 6 log.go:172] (0xc00186a630) Data frame received for 3 I0723 11:19:49.401490 6 log.go:172] (0xc0014db720) (3) Data frame handling I0723 11:19:49.401687 6 log.go:172] (0xc00186a630) Data frame received for 5 I0723 11:19:49.401714 6 log.go:172] (0xc000e42aa0) (5) Data frame handling I0723 11:19:49.403406 6 log.go:172] (0xc00186a630) Data frame received for 1 I0723 11:19:49.403446 6 log.go:172] (0xc000e42820) (1) Data frame handling I0723 11:19:49.403475 6 log.go:172] (0xc000e42820) (1) Data frame sent I0723 11:19:49.403496 6 log.go:172] (0xc00186a630) (0xc000e42820) Stream removed, broadcasting: 1 I0723 11:19:49.403519 6 log.go:172] (0xc00186a630) Go away received I0723 11:19:49.403637 6 log.go:172] (0xc00186a630) 
(0xc000e42820) Stream removed, broadcasting: 1 I0723 11:19:49.403666 6 log.go:172] (0xc00186a630) (0xc0014db720) Stream removed, broadcasting: 3 I0723 11:19:49.403677 6 log.go:172] (0xc00186a630) (0xc000e42aa0) Stream removed, broadcasting: 5 Jul 23 11:19:49.403: INFO: Found all expected endpoints: [netserver-0] Jul 23 11:19:49.406: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.211 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zprzx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 23 11:19:49.406: INFO: >>> kubeConfig: /root/.kube/config I0723 11:19:49.438137 6 log.go:172] (0xc00184c4d0) (0xc0014db9a0) Create stream I0723 11:19:49.438156 6 log.go:172] (0xc00184c4d0) (0xc0014db9a0) Stream added, broadcasting: 1 I0723 11:19:49.439566 6 log.go:172] (0xc00184c4d0) Reply frame received for 1 I0723 11:19:49.439612 6 log.go:172] (0xc00184c4d0) (0xc0014dbae0) Create stream I0723 11:19:49.439625 6 log.go:172] (0xc00184c4d0) (0xc0014dbae0) Stream added, broadcasting: 3 I0723 11:19:49.440646 6 log.go:172] (0xc00184c4d0) Reply frame received for 3 I0723 11:19:49.440663 6 log.go:172] (0xc00184c4d0) (0xc001af4460) Create stream I0723 11:19:49.440672 6 log.go:172] (0xc00184c4d0) (0xc001af4460) Stream added, broadcasting: 5 I0723 11:19:49.441668 6 log.go:172] (0xc00184c4d0) Reply frame received for 5 I0723 11:19:50.506126 6 log.go:172] (0xc00184c4d0) Data frame received for 3 I0723 11:19:50.506155 6 log.go:172] (0xc0014dbae0) (3) Data frame handling I0723 11:19:50.506165 6 log.go:172] (0xc0014dbae0) (3) Data frame sent I0723 11:19:50.506171 6 log.go:172] (0xc00184c4d0) Data frame received for 3 I0723 11:19:50.506181 6 log.go:172] (0xc0014dbae0) (3) Data frame handling I0723 11:19:50.506202 6 log.go:172] (0xc00184c4d0) Data frame received for 5 I0723 11:19:50.506212 6 log.go:172] (0xc001af4460) (5) Data frame handling I0723 11:19:50.507223 6 log.go:172] 
(0xc00184c4d0) Data frame received for 1 I0723 11:19:50.507240 6 log.go:172] (0xc0014db9a0) (1) Data frame handling I0723 11:19:50.507249 6 log.go:172] (0xc0014db9a0) (1) Data frame sent I0723 11:19:50.507295 6 log.go:172] (0xc00184c4d0) (0xc0014db9a0) Stream removed, broadcasting: 1 I0723 11:19:50.507414 6 log.go:172] (0xc00184c4d0) (0xc0014db9a0) Stream removed, broadcasting: 1 I0723 11:19:50.507476 6 log.go:172] (0xc00184c4d0) (0xc0014dbae0) Stream removed, broadcasting: 3 I0723 11:19:50.507693 6 log.go:172] (0xc00184c4d0) (0xc001af4460) Stream removed, broadcasting: 5 Jul 23 11:19:50.507: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 I0723 11:19:50.508047 6 log.go:172] (0xc00184c4d0) Go away received Jul 23 11:19:50.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-zprzx" for this suite. 
Jul 23 11:20:14.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:20:14.645: INFO: namespace: e2e-tests-pod-network-test-zprzx, resource: bindings, ignored listing per whitelist Jul 23 11:20:14.669: INFO: namespace e2e-tests-pod-network-test-zprzx deletion completed in 24.157913043s • [SLOW TEST:68.830 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:20:14.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 23 11:20:14.797: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-sl5dc' Jul 23 11:20:21.046: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 23 11:20:21.046: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jul 23 11:20:21.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-sl5dc' Jul 23 11:20:21.203: INFO: stderr: "" Jul 23 11:20:21.203: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:20:21.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sl5dc" for this suite. 
Jul 23 11:20:43.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:20:43.343: INFO: namespace: e2e-tests-kubectl-sl5dc, resource: bindings, ignored listing per whitelist Jul 23 11:20:43.368: INFO: namespace e2e-tests-kubectl-sl5dc deletion completed in 22.117289219s • [SLOW TEST:28.698 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:20:43.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 23 11:20:43.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Jul 23 11:20:43.586: INFO: stderr: "" Jul 23 11:20:43.587: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", 
GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-10T10:25:27Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jul 23 11:20:43.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fqtkr' Jul 23 11:20:43.865: INFO: stderr: "" Jul 23 11:20:43.865: INFO: stdout: "replicationcontroller/redis-master created\n" Jul 23 11:20:43.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fqtkr' Jul 23 11:20:44.153: INFO: stderr: "" Jul 23 11:20:44.153: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jul 23 11:20:45.157: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:45.157: INFO: Found 0 / 1 Jul 23 11:20:46.156: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:46.156: INFO: Found 0 / 1 Jul 23 11:20:47.156: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:47.156: INFO: Found 0 / 1 Jul 23 11:20:48.161: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:48.161: INFO: Found 0 / 1 Jul 23 11:20:49.738: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:49.738: INFO: Found 0 / 1 Jul 23 11:20:51.768: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:51.768: INFO: Found 0 / 1 Jul 23 11:20:52.273: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:52.273: INFO: Found 0 / 1 Jul 23 11:20:53.156: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:53.156: INFO: Found 0 / 1 Jul 23 11:20:54.156: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:54.156: INFO: Found 0 / 1 Jul 23 11:20:55.625: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:55.625: INFO: Found 0 / 1 Jul 23 11:20:56.243: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:56.243: INFO: Found 0 / 1 Jul 23 11:20:57.157: INFO: Selector matched 1 pods for 
map[app:redis] Jul 23 11:20:57.157: INFO: Found 0 / 1 Jul 23 11:20:58.423: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:58.423: INFO: Found 1 / 1 Jul 23 11:20:58.423: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 23 11:20:58.427: INFO: Selector matched 1 pods for map[app:redis] Jul 23 11:20:58.427: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 23 11:20:58.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-xdtpc --namespace=e2e-tests-kubectl-fqtkr' Jul 23 11:20:58.562: INFO: stderr: "" Jul 23 11:20:58.562: INFO: stdout: "Name: redis-master-xdtpc\nNamespace: e2e-tests-kubectl-fqtkr\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.18.0.2\nStart Time: Thu, 23 Jul 2020 11:20:43 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.212\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://a8c753402fbed474894d66a2b402882ee47af80261a65c0f102fc1e0966042e8\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 23 Jul 2020 11:20:57 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dbt4l (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dbt4l:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dbt4l\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 15s default-scheduler 
Successfully assigned e2e-tests-kubectl-fqtkr/redis-master-xdtpc to hunter-worker2\n Normal Pulled 13s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" Jul 23 11:20:58.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-fqtkr' Jul 23 11:20:58.684: INFO: stderr: "" Jul 23 11:20:58.684: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-fqtkr\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 15s replication-controller Created pod: redis-master-xdtpc\n" Jul 23 11:20:58.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-fqtkr' Jul 23 11:20:58.793: INFO: stderr: "" Jul 23 11:20:58.793: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-fqtkr\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.103.0.20\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.212:6379\nSession Affinity: None\nEvents: \n" Jul 23 11:20:58.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Jul 23 11:20:58.924: INFO: stderr: "" Jul 23 11:20:58.924: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n 
kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jul 2020 10:22:18 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 23 Jul 2020 11:20:57 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 23 Jul 2020 11:20:57 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 23 Jul 2020 11:20:57 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 23 Jul 2020 11:20:57 +0000 Fri, 10 Jul 2020 10:23:08 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.8\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 86b921187bcd42a69301f53c2d21b8f0\n System UUID: dbd65bbc-7a27-4b36-b69e-be53f27cba5c\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- 
------------- ---\n kube-system coredns-54ff9cd656-46fs4 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 13d\n kube-system coredns-54ff9cd656-gzt7d 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 13d\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13d\n kube-system kindnet-r4bfs 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 13d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 13d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 13d\n kube-system kube-proxy-4jv56 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 13d\n local-path-storage local-path-provisioner-674595c7-jw5rw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jul 23 11:20:58.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-fqtkr' Jul 23 11:20:59.052: INFO: stderr: "" Jul 23 11:20:59.052: INFO: stdout: "Name: e2e-tests-kubectl-fqtkr\nLabels: e2e-framework=kubectl\n e2e-run=d1f26527-ccd1-11ea-92a5-0242ac11000b\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:20:59.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fqtkr" for this suite. 
Jul 23 11:21:39.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:21:39.180: INFO: namespace: e2e-tests-kubectl-fqtkr, resource: bindings, ignored listing per whitelist Jul 23 11:21:39.223: INFO: namespace e2e-tests-kubectl-fqtkr deletion completed in 40.168078932s • [SLOW TEST:55.856 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:21:39.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:21:43.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-xhw92" 
for this suite. Jul 23 11:22:47.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:22:47.434: INFO: namespace: e2e-tests-kubelet-test-xhw92, resource: bindings, ignored listing per whitelist Jul 23 11:22:47.469: INFO: namespace e2e-tests-kubelet-test-xhw92 deletion completed in 1m4.083468867s • [SLOW TEST:68.246 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:22:47.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 23 11:22:50.051: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jul 23 11:22:50.114: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:22:50.117: INFO: Number of nodes with available pods: 0 Jul 23 11:22:50.117: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:22:51.689: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:22:51.964: INFO: Number of nodes with available pods: 0 Jul 23 11:22:51.964: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:22:52.293: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:22:52.295: INFO: Number of nodes with available pods: 0 Jul 23 11:22:52.295: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:22:53.137: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:22:53.140: INFO: Number of nodes with available pods: 0 Jul 23 11:22:53.140: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:22:54.162: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:22:54.371: INFO: Number of nodes with available pods: 0 Jul 23 11:22:54.371: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:22:55.120: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:22:55.122: INFO: Number of nodes with available pods: 0 Jul 23 11:22:55.122: 
INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:22:56.151: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:22:56.153: INFO: Number of nodes with available pods: 1 Jul 23 11:22:56.153: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:22:57.122: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:22:57.124: INFO: Number of nodes with available pods: 2 Jul 23 11:22:57.124: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jul 23 11:22:57.168: INFO: Wrong image for pod: daemon-set-5vkph. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:22:57.168: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:22:57.181: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:22:58.686: INFO: Wrong image for pod: daemon-set-5vkph. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:22:58.686: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:22:58.814: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:22:59.305: INFO: Wrong image for pod: daemon-set-5vkph. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:22:59.305: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:22:59.308: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:00.359: INFO: Wrong image for pod: daemon-set-5vkph. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:00.359: INFO: Pod daemon-set-5vkph is not available Jul 23 11:23:00.359: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:00.362: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:01.187: INFO: Wrong image for pod: daemon-set-5vkph. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:01.187: INFO: Pod daemon-set-5vkph is not available Jul 23 11:23:01.187: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:01.190: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:02.185: INFO: Wrong image for pod: daemon-set-5vkph. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:02.185: INFO: Pod daemon-set-5vkph is not available Jul 23 11:23:02.185: INFO: Wrong image for pod: daemon-set-zdf2v. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:02.187: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:03.203: INFO: Wrong image for pod: daemon-set-5vkph. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:03.203: INFO: Pod daemon-set-5vkph is not available Jul 23 11:23:03.203: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:03.395: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:04.185: INFO: Wrong image for pod: daemon-set-5vkph. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:04.185: INFO: Pod daemon-set-5vkph is not available Jul 23 11:23:04.185: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:04.188: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:05.335: INFO: Wrong image for pod: daemon-set-5vkph. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:05.335: INFO: Pod daemon-set-5vkph is not available Jul 23 11:23:05.335: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 23 11:23:05.339: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:06.185: INFO: Wrong image for pod: daemon-set-5vkph. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:06.185: INFO: Pod daemon-set-5vkph is not available Jul 23 11:23:06.185: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:06.188: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:07.186: INFO: Wrong image for pod: daemon-set-5vkph. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:07.186: INFO: Pod daemon-set-5vkph is not available Jul 23 11:23:07.186: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:07.189: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:08.184: INFO: Pod daemon-set-6tjsh is not available Jul 23 11:23:08.184: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:08.187: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:09.184: INFO: Pod daemon-set-6tjsh is not available Jul 23 11:23:09.184: INFO: Wrong image for pod: daemon-set-zdf2v. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:09.188: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:10.185: INFO: Pod daemon-set-6tjsh is not available Jul 23 11:23:10.185: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:10.188: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:11.184: INFO: Pod daemon-set-6tjsh is not available Jul 23 11:23:11.184: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:11.186: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:12.185: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jul 23 11:23:12.188: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:13.185: INFO: Wrong image for pod: daemon-set-zdf2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jul 23 11:23:13.185: INFO: Pod daemon-set-zdf2v is not available Jul 23 11:23:13.189: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:14.251: INFO: Pod daemon-set-swvzq is not available Jul 23 11:23:14.254: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jul 23 11:23:14.259: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:14.260: INFO: Number of nodes with available pods: 1 Jul 23 11:23:14.260: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:23:15.264: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:15.266: INFO: Number of nodes with available pods: 1 Jul 23 11:23:15.266: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:23:16.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:16.266: INFO: Number of nodes with available pods: 1 Jul 23 11:23:16.266: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:23:17.264: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:23:17.268: INFO: Number of nodes with available pods: 2 Jul 23 11:23:17.268: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon 
set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-k2cbj, will wait for the garbage collector to delete the pods Jul 23 11:23:17.340: INFO: Deleting DaemonSet.extensions daemon-set took: 6.375174ms Jul 23 11:23:17.440: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.205302ms Jul 23 11:23:27.644: INFO: Number of nodes with available pods: 0 Jul 23 11:23:27.644: INFO: Number of running nodes: 0, number of available pods: 0 Jul 23 11:23:27.676: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-k2cbj/daemonsets","resourceVersion":"2352264"},"items":null} Jul 23 11:23:27.679: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-k2cbj/pods","resourceVersion":"2352264"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:23:27.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-k2cbj" for this suite. 
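For reference, the rolling update traced above can be reproduced with a DaemonSet along the following lines. The two image names come from the log; the metadata name matches the logged "daemon-set", but the labels, selector, and container name are illustrative assumptions, not copied from the suite:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set          # illustrative label; the suite uses its own
  updateStrategy:
    type: RollingUpdate        # pods are replaced in place when the template changes
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # No control-plane toleration, hence the repeated "can't tolerate node
      # hunter-control-plane ... NoSchedule" lines above.
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
        # The test then updates this to gcr.io/kubernetes-e2e-test-images/redis:1.0,
        # producing the "Wrong image for pod" messages until the rollout completes.
```

Patching `spec.template` is what triggers the rollout; the polling loop in the log is simply the test waiting for every node's pod to report the new image and become available.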
Jul 23 11:23:33.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:23:33.748: INFO: namespace: e2e-tests-daemonsets-k2cbj, resource: bindings, ignored listing per whitelist Jul 23 11:23:33.771: INFO: namespace e2e-tests-daemonsets-k2cbj deletion completed in 6.07856214s • [SLOW TEST:46.301 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:23:33.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-f1b6d744-ccd6-11ea-92a5-0242ac11000b STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:23:40.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-m67mj" for this suite. 
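The "binary data should be reflected in volume" test above creates a ConfigMap of roughly this shape (the key names and payload here are hypothetical; the logged object name carries a per-run UID suffix):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd     # logged as configmap-test-upd-<uid>
data:
  data-1: value-1              # text key, surfaced as a plain file in the volume
binaryData:
  dump.bin: aGVsbG8=           # base64-encoded bytes, surfaced verbatim as a file
```

The distinction being verified is that `binaryData` entries survive the round trip byte-for-byte when mounted, alongside ordinary `data` entries.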
Jul 23 11:24:06.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:24:06.274: INFO: namespace: e2e-tests-configmap-m67mj, resource: bindings, ignored listing per whitelist Jul 23 11:24:06.285: INFO: namespace e2e-tests-configmap-m67mj deletion completed in 26.135345365s • [SLOW TEST:32.514 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:24:06.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 23 11:24:06.494: INFO: Waiting up to 5m0s for pod "pod-052a0578-ccd7-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-j4nd5" to be "success or failure" Jul 23 11:24:06.516: INFO: Pod "pod-052a0578-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.760208ms Jul 23 11:24:08.671: INFO: Pod "pod-052a0578-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.176643143s Jul 23 11:24:10.731: INFO: Pod "pod-052a0578-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236380131s Jul 23 11:24:12.735: INFO: Pod "pod-052a0578-ccd7-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.240719995s STEP: Saw pod success Jul 23 11:24:12.735: INFO: Pod "pod-052a0578-ccd7-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:24:12.738: INFO: Trying to get logs from node hunter-worker pod pod-052a0578-ccd7-11ea-92a5-0242ac11000b container test-container: STEP: delete the pod Jul 23 11:24:12.757: INFO: Waiting for pod pod-052a0578-ccd7-11ea-92a5-0242ac11000b to disappear Jul 23 11:24:12.784: INFO: Pod pod-052a0578-ccd7-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:24:12.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-j4nd5" for this suite. 
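The test name "(non-root,0666,default)" encodes three parameters: run the container as a non-root user, expect files with 0666 permissions, and use the default emptyDir medium (node disk rather than tmpfs). A minimal sketch of such a pod, with a placeholder image and user ID standing in for the suite's own mount-test image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666        # hypothetical name; the log appends a UID
spec:
  securityContext:
    runAsUser: 1001              # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox               # placeholder for the e2e mounttest image
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # "default" medium; medium: Memory would be tmpfs
  restartPolicy: Never
```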
Jul 23 11:24:18.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:24:18.834: INFO: namespace: e2e-tests-emptydir-j4nd5, resource: bindings, ignored listing per whitelist Jul 23 11:24:19.220: INFO: namespace e2e-tests-emptydir-j4nd5 deletion completed in 6.43102223s • [SLOW TEST:12.934 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:24:19.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 23 11:24:19.956: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"0d08dafe-ccd7-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc001b611aa), BlockOwnerDeletion:(*bool)(0xc001b611ab)}} Jul 23 11:24:20.031: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0cfae722-ccd7-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc000ef00b2), BlockOwnerDeletion:(*bool)(0xc000ef00b3)}} Jul 23 11:24:20.071: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0cfb9499-ccd7-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc000fec0ea), BlockOwnerDeletion:(*bool)(0xc000fec0eb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:24:25.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-gmr5q" for this suite. Jul 23 11:24:31.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:24:31.189: INFO: namespace: e2e-tests-gc-gmr5q, resource: bindings, ignored listing per whitelist Jul 23 11:24:31.217: INFO: namespace e2e-tests-gc-gmr5q deletion completed in 6.127552338s • [SLOW TEST:11.997 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:24:31.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 23 11:24:31.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-n7fdz' Jul 23 11:24:31.453: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 23 11:24:31.453: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jul 23 11:24:35.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-n7fdz' Jul 23 11:24:35.576: INFO: stderr: "" Jul 23 11:24:35.576: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:24:35.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-n7fdz" for this suite. 
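As the stderr in the log notes, `--generator=deployment/v1beta1` is deprecated. The command shown produces, approximately, the following Deployment (reconstructed from the command line; the label key and container name follow `kubectl run` conventions and are assumptions):

```yaml
apiVersion: extensions/v1beta1   # matches the logged deployment.extensions/...
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

On current clusters the equivalent would be `kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine`.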
Jul 23 11:24:59.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:24:59.722: INFO: namespace: e2e-tests-kubectl-n7fdz, resource: bindings, ignored listing per whitelist Jul 23 11:24:59.749: INFO: namespace e2e-tests-kubectl-n7fdz deletion completed in 24.165526478s • [SLOW TEST:28.532 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:24:59.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-24fce0e7-ccd7-11ea-92a5-0242ac11000b STEP: Creating a pod to test consume configMaps Jul 23 11:24:59.909: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-24fe1dd7-ccd7-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-8s48l" to be "success or failure" Jul 23 11:24:59.924: INFO: Pod 
"pod-projected-configmaps-24fe1dd7-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.231501ms Jul 23 11:25:01.929: INFO: Pod "pod-projected-configmaps-24fe1dd7-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019903065s Jul 23 11:25:03.933: INFO: Pod "pod-projected-configmaps-24fe1dd7-ccd7-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023574433s STEP: Saw pod success Jul 23 11:25:03.933: INFO: Pod "pod-projected-configmaps-24fe1dd7-ccd7-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:25:03.935: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-24fe1dd7-ccd7-11ea-92a5-0242ac11000b container projected-configmap-volume-test: STEP: delete the pod Jul 23 11:25:03.987: INFO: Waiting for pod pod-projected-configmaps-24fe1dd7-ccd7-11ea-92a5-0242ac11000b to disappear Jul 23 11:25:04.055: INFO: Pod pod-projected-configmaps-24fe1dd7-ccd7-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:25:04.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8s48l" for this suite. 
Jul 23 11:25:10.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:25:10.107: INFO: namespace: e2e-tests-projected-8s48l, resource: bindings, ignored listing per whitelist Jul 23 11:25:10.148: INFO: namespace e2e-tests-projected-8s48l deletion completed in 6.088906043s • [SLOW TEST:10.398 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:25:10.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Jul 23 11:25:10.288: INFO: Waiting up to 5m0s for pod "client-containers-2b28a485-ccd7-11ea-92a5-0242ac11000b" in namespace "e2e-tests-containers-msfjx" to be "success or failure" Jul 23 11:25:10.291: INFO: Pod "client-containers-2b28a485-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.270672ms Jul 23 11:25:12.294: INFO: Pod "client-containers-2b28a485-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00563165s Jul 23 11:25:14.505: INFO: Pod "client-containers-2b28a485-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21641492s Jul 23 11:25:16.508: INFO: Pod "client-containers-2b28a485-ccd7-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.219284334s STEP: Saw pod success Jul 23 11:25:16.508: INFO: Pod "client-containers-2b28a485-ccd7-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:25:16.510: INFO: Trying to get logs from node hunter-worker pod client-containers-2b28a485-ccd7-11ea-92a5-0242ac11000b container test-container: STEP: delete the pod Jul 23 11:25:16.805: INFO: Waiting for pod client-containers-2b28a485-ccd7-11ea-92a5-0242ac11000b to disappear Jul 23 11:25:16.836: INFO: Pod client-containers-2b28a485-ccd7-11ea-92a5-0242ac11000b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:25:16.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-msfjx" for this suite. 
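The "image defaults" test above hinges on what is left out of the pod spec: with neither `command` nor `args` set, the container runs the image's own ENTRYPOINT and CMD. A sketch with a placeholder image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-defaults   # hypothetical; the log appends a UID
spec:
  containers:
  - name: test-container
    image: busybox                   # placeholder for the suite's test image
    # command: and args: deliberately omitted — the image defaults apply
  restartPolicy: Never
```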
Jul 23 11:25:23.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:25:23.253: INFO: namespace: e2e-tests-containers-msfjx, resource: bindings, ignored listing per whitelist Jul 23 11:25:23.482: INFO: namespace e2e-tests-containers-msfjx deletion completed in 6.641612588s • [SLOW TEST:13.333 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:25:23.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 23 11:25:23.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-334a9161-ccd7-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-8l4bj" to be "success or failure" Jul 23 11:25:24.002: INFO: Pod "downwardapi-volume-334a9161-ccd7-11ea-92a5-0242ac11000b": 
Phase="Pending", Reason="", readiness=false. Elapsed: 102.827135ms Jul 23 11:25:26.006: INFO: Pod "downwardapi-volume-334a9161-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106631933s Jul 23 11:25:28.009: INFO: Pod "downwardapi-volume-334a9161-ccd7-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.110214976s Jul 23 11:25:30.013: INFO: Pod "downwardapi-volume-334a9161-ccd7-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114143529s STEP: Saw pod success Jul 23 11:25:30.013: INFO: Pod "downwardapi-volume-334a9161-ccd7-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:25:30.016: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-334a9161-ccd7-11ea-92a5-0242ac11000b container client-container: STEP: delete the pod Jul 23 11:25:30.095: INFO: Waiting for pod downwardapi-volume-334a9161-ccd7-11ea-92a5-0242ac11000b to disappear Jul 23 11:25:30.111: INFO: Pod downwardapi-volume-334a9161-ccd7-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:25:30.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8l4bj" for this suite. 
Jul 23 11:25:36.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:25:36.239: INFO: namespace: e2e-tests-downward-api-8l4bj, resource: bindings, ignored listing per whitelist Jul 23 11:25:36.242: INFO: namespace e2e-tests-downward-api-8l4bj deletion completed in 6.126935985s • [SLOW TEST:12.760 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:25:36.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-3ab4c0ec-ccd7-11ea-92a5-0242ac11000b STEP: Creating a pod to test consume configMaps Jul 23 11:25:36.334: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ab637ac-ccd7-11ea-92a5-0242ac11000b" in namespace "e2e-tests-configmap-dl4pc" to be "success or failure" Jul 23 11:25:36.338: INFO: Pod "pod-configmaps-3ab637ac-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.230091ms Jul 23 11:25:38.342: INFO: Pod "pod-configmaps-3ab637ac-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008090917s Jul 23 11:25:40.346: INFO: Pod "pod-configmaps-3ab637ac-ccd7-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011564529s STEP: Saw pod success Jul 23 11:25:40.346: INFO: Pod "pod-configmaps-3ab637ac-ccd7-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:25:40.348: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3ab637ac-ccd7-11ea-92a5-0242ac11000b container configmap-volume-test: STEP: delete the pod Jul 23 11:25:40.386: INFO: Waiting for pod pod-configmaps-3ab637ac-ccd7-11ea-92a5-0242ac11000b to disappear Jul 23 11:25:40.419: INFO: Pod pod-configmaps-3ab637ac-ccd7-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:25:40.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dl4pc" for this suite. 
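The "defaultMode set" test above concerns only the volume definition: `defaultMode` sets the permission bits applied to every file projected from the ConfigMap. A fragment of the consuming pod's volume section, with an assumed mode value:

```yaml
volumes:
- name: configmap-volume
  configMap:
    name: configmap-test-volume    # logged with a per-run UID suffix
    defaultMode: 0400              # assumed value; applied to each projected file
```

Individual `items` entries can override this per file with their own `mode` field.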
Jul 23 11:25:46.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:25:46.478: INFO: namespace: e2e-tests-configmap-dl4pc, resource: bindings, ignored listing per whitelist
Jul 23 11:25:46.510: INFO: namespace e2e-tests-configmap-dl4pc deletion completed in 6.087836279s

• [SLOW TEST:10.268 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:25:46.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-40df6c92-ccd7-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 23 11:25:46.714: INFO: Waiting up to 5m0s for pod "pod-secrets-40e21e78-ccd7-11ea-92a5-0242ac11000b" in namespace "e2e-tests-secrets-x2mtf" to be "success or failure"
Jul 23 11:25:46.783: INFO: Pod "pod-secrets-40e21e78-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 69.338039ms
Jul 23 11:25:48.787: INFO: Pod "pod-secrets-40e21e78-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072780777s
Jul 23 11:25:50.791: INFO: Pod "pod-secrets-40e21e78-ccd7-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076752714s
STEP: Saw pod success
Jul 23 11:25:50.791: INFO: Pod "pod-secrets-40e21e78-ccd7-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:25:50.794: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-40e21e78-ccd7-11ea-92a5-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 23 11:25:50.985: INFO: Waiting for pod pod-secrets-40e21e78-ccd7-11ea-92a5-0242ac11000b to disappear
Jul 23 11:25:50.992: INFO: Pod pod-secrets-40e21e78-ccd7-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:25:50.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x2mtf" for this suite.
Jul 23 11:25:57.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:25:57.077: INFO: namespace: e2e-tests-secrets-x2mtf, resource: bindings, ignored listing per whitelist
Jul 23 11:25:57.094: INFO: namespace e2e-tests-secrets-x2mtf deletion completed in 6.099179156s

• [SLOW TEST:10.583 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:25:57.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-kmhq
STEP: Creating a pod to test atomic-volume-subpath
Jul 23 11:25:57.245: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-kmhq" in namespace "e2e-tests-subpath-4pqvs" to be "success or failure"
Jul 23 11:25:57.262: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.546973ms
Jul 23 11:25:59.295: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04976843s
Jul 23 11:26:01.298: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053268704s
Jul 23 11:26:03.303: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057540441s
Jul 23 11:26:05.307: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Running", Reason="", readiness=false. Elapsed: 8.061445695s
Jul 23 11:26:07.311: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Running", Reason="", readiness=false. Elapsed: 10.06620698s
Jul 23 11:26:09.315: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Running", Reason="", readiness=false. Elapsed: 12.069428854s
Jul 23 11:26:11.318: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Running", Reason="", readiness=false. Elapsed: 14.072715528s
Jul 23 11:26:13.322: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Running", Reason="", readiness=false. Elapsed: 16.076753075s
Jul 23 11:26:15.326: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Running", Reason="", readiness=false. Elapsed: 18.080642785s
Jul 23 11:26:17.330: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Running", Reason="", readiness=false. Elapsed: 20.08479825s
Jul 23 11:26:19.335: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Running", Reason="", readiness=false. Elapsed: 22.089354512s
Jul 23 11:26:21.339: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Running", Reason="", readiness=false. Elapsed: 24.093799838s
Jul 23 11:26:23.343: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Running", Reason="", readiness=false. Elapsed: 26.098087391s
Jul 23 11:26:25.346: INFO: Pod "pod-subpath-test-projected-kmhq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.101072938s
STEP: Saw pod success
Jul 23 11:26:25.346: INFO: Pod "pod-subpath-test-projected-kmhq" satisfied condition "success or failure"
Jul 23 11:26:25.349: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-kmhq container test-container-subpath-projected-kmhq: 
STEP: delete the pod
Jul 23 11:26:25.453: INFO: Waiting for pod pod-subpath-test-projected-kmhq to disappear
Jul 23 11:26:25.471: INFO: Pod pod-subpath-test-projected-kmhq no longer exists
STEP: Deleting pod pod-subpath-test-projected-kmhq
Jul 23 11:26:25.472: INFO: Deleting pod "pod-subpath-test-projected-kmhq" in namespace "e2e-tests-subpath-4pqvs"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:26:25.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-4pqvs" for this suite.
Jul 23 11:26:31.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:26:31.514: INFO: namespace: e2e-tests-subpath-4pqvs, resource: bindings, ignored listing per whitelist
Jul 23 11:26:31.721: INFO: namespace e2e-tests-subpath-4pqvs deletion completed in 6.243406512s

• [SLOW TEST:34.627 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:26:31.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 23 11:26:31.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6gjzl'
Jul 23 11:26:31.993: INFO: stderr: ""
Jul 23 11:26:31.993: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jul 23 11:26:37.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6gjzl -o json'
Jul 23 11:26:37.142: INFO: stderr: ""
Jul 23 11:26:37.142: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-23T11:26:31Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-6gjzl\",\n \"resourceVersion\": \"2352978\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-6gjzl/pods/e2e-test-nginx-pod\",\n \"uid\": \"5be1d3b2-ccd7-11ea-b2c9-0242ac120008\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-5wczv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-5wczv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-5wczv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-23T11:26:32Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-23T11:26:35Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-23T11:26:35Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-23T11:26:31Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://801033e8a2aa5ec6879f4b6527bc746553b00fcdf1b38a23956a6869ec404061\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-23T11:26:34Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.222\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-23T11:26:32Z\"\n }\n}\n"
STEP: replace the image in the pod
Jul 23 11:26:37.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-6gjzl'
Jul 23 11:26:37.438: INFO: stderr: ""
Jul 23 11:26:37.438: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jul 23 11:26:37.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6gjzl'
Jul 23 11:26:47.430: INFO: stderr: ""
Jul 23 11:26:47.430: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:26:47.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6gjzl" for this suite.
Jul 23 11:26:53.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:26:53.482: INFO: namespace: e2e-tests-kubectl-6gjzl, resource: bindings, ignored listing per whitelist
Jul 23 11:26:53.529: INFO: namespace e2e-tests-kubectl-6gjzl deletion completed in 6.094493786s

• [SLOW TEST:21.808 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:26:53.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 11:26:53.612: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68c61698-ccd7-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-qkjm5" to be "success or failure"
Jul 23 11:26:53.645: INFO: Pod "downwardapi-volume-68c61698-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.994266ms
Jul 23 11:26:55.649: INFO: Pod "downwardapi-volume-68c61698-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036944135s
Jul 23 11:26:57.654: INFO: Pod "downwardapi-volume-68c61698-ccd7-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.041297232s
Jul 23 11:26:59.658: INFO: Pod "downwardapi-volume-68c61698-ccd7-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045236119s
STEP: Saw pod success
Jul 23 11:26:59.658: INFO: Pod "downwardapi-volume-68c61698-ccd7-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:26:59.660: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-68c61698-ccd7-11ea-92a5-0242ac11000b container client-container: 
STEP: delete the pod
Jul 23 11:26:59.677: INFO: Waiting for pod downwardapi-volume-68c61698-ccd7-11ea-92a5-0242ac11000b to disappear
Jul 23 11:26:59.682: INFO: Pod downwardapi-volume-68c61698-ccd7-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:26:59.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qkjm5" for this suite.
Jul 23 11:27:05.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:27:05.746: INFO: namespace: e2e-tests-projected-qkjm5, resource: bindings, ignored listing per whitelist
Jul 23 11:27:05.748: INFO: namespace e2e-tests-projected-qkjm5 deletion completed in 6.064032498s

• [SLOW TEST:12.219 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:27:05.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 23 11:27:05.977: INFO: Waiting up to 5m0s for pod "pod-70230fcc-ccd7-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-p8rnn" to be "success or failure"
Jul 23 11:27:05.993: INFO: Pod "pod-70230fcc-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.154802ms
Jul 23 11:27:08.716: INFO: Pod "pod-70230fcc-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.738353055s
Jul 23 11:27:10.720: INFO: Pod "pod-70230fcc-ccd7-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74284296s
Jul 23 11:27:12.724: INFO: Pod "pod-70230fcc-ccd7-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.747005106s
STEP: Saw pod success
Jul 23 11:27:12.724: INFO: Pod "pod-70230fcc-ccd7-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:27:12.726: INFO: Trying to get logs from node hunter-worker2 pod pod-70230fcc-ccd7-11ea-92a5-0242ac11000b container test-container: 
STEP: delete the pod
Jul 23 11:27:12.903: INFO: Waiting for pod pod-70230fcc-ccd7-11ea-92a5-0242ac11000b to disappear
Jul 23 11:27:12.940: INFO: Pod pod-70230fcc-ccd7-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:27:12.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p8rnn" for this suite.
Jul 23 11:27:18.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:27:19.025: INFO: namespace: e2e-tests-emptydir-p8rnn, resource: bindings, ignored listing per whitelist
Jul 23 11:27:19.038: INFO: namespace e2e-tests-emptydir-p8rnn deletion completed in 6.094034721s

• [SLOW TEST:13.290 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:27:19.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-gxcpv
Jul 23 11:27:25.177: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-gxcpv
STEP: checking the pod's current state and verifying that restartCount is present
Jul 23 11:27:25.179: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:31:26.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gxcpv" for this suite.
Jul 23 11:31:32.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:31:32.962: INFO: namespace: e2e-tests-container-probe-gxcpv, resource: bindings, ignored listing per whitelist
Jul 23 11:31:33.096: INFO: namespace e2e-tests-container-probe-gxcpv deletion completed in 6.185293145s

• [SLOW TEST:254.057 seconds]
[k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:31:33.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 11:31:33.180: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:31:34.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-krd9r" for this suite.
Jul 23 11:31:40.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:31:40.381: INFO: namespace: e2e-tests-custom-resource-definition-krd9r, resource: bindings, ignored listing per whitelist
Jul 23 11:31:40.383: INFO: namespace e2e-tests-custom-resource-definition-krd9r deletion completed in 6.114757974s

• [SLOW TEST:7.287 seconds]
[sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:31:40.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-vkz7h
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vkz7h to expose endpoints map[]
Jul 23 11:31:40.567: INFO: Get endpoints failed (16.096328ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul 23 11:31:41.570: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vkz7h exposes endpoints map[] (1.019771992s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-vkz7h
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vkz7h to expose endpoints map[pod1:[100]]
Jul 23 11:31:45.827: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vkz7h exposes endpoints map[pod1:[100]] (4.248973637s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-vkz7h
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vkz7h to expose endpoints map[pod1:[100] pod2:[101]]
Jul 23 11:31:49.889: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vkz7h exposes endpoints map[pod1:[100] pod2:[101]] (4.057728402s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-vkz7h
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vkz7h to expose endpoints map[pod2:[101]]
Jul 23 11:31:49.910: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vkz7h exposes endpoints map[pod2:[101]] (14.602927ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-vkz7h
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vkz7h to expose endpoints map[]
Jul 23 11:31:50.930: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vkz7h exposes endpoints map[] (1.016596797s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:31:50.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-vkz7h" for this suite.
Jul 23 11:32:13.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:32:13.104: INFO: namespace: e2e-tests-services-vkz7h, resource: bindings, ignored listing per whitelist
Jul 23 11:32:13.122: INFO: namespace e2e-tests-services-vkz7h deletion completed in 22.154140527s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:32.738 seconds]
[sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:32:13.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-xpn98
Jul 23 11:32:17.266: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-xpn98
STEP: checking the pod's current state and verifying that restartCount is present
Jul 23 11:32:17.269: INFO: Initial restart count of pod liveness-exec is 0
Jul 23 11:33:09.938: INFO: Restart count of pod e2e-tests-container-probe-xpn98/liveness-exec is now 1 (52.66902066s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:33:10.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-xpn98" for this suite.
Jul 23 11:33:18.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:33:18.214: INFO: namespace: e2e-tests-container-probe-xpn98, resource: bindings, ignored listing per whitelist
Jul 23 11:33:18.222: INFO: namespace e2e-tests-container-probe-xpn98 deletion completed in 8.087238404s

• [SLOW TEST:65.100 seconds]
[k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:33:18.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 23 11:33:18.334: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 23 11:33:18.354: INFO: Waiting for terminating namespaces to be deleted...
Jul 23 11:33:18.356: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Jul 23 11:33:18.363: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Jul 23 11:33:18.363: INFO: Container kube-proxy ready: true, restart count 0
Jul 23 11:33:18.363: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Jul 23 11:33:18.363: INFO: Container kindnet-cni ready: true, restart count 0
Jul 23 11:33:18.363: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Jul 23 11:33:18.369: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Jul 23 11:33:18.369: INFO: Container kindnet-cni ready: true, restart count 0
Jul 23 11:33:18.369: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Jul 23 11:33:18.369: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5085b506-ccd8-11ea-92a5-0242ac11000b 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-5085b506-ccd8-11ea-92a5-0242ac11000b off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5085b506-ccd8-11ea-92a5-0242ac11000b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:33:26.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-9vlwb" for this suite. Jul 23 11:33:34.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:33:34.610: INFO: namespace: e2e-tests-sched-pred-9vlwb, resource: bindings, ignored listing per whitelist Jul 23 11:33:34.621: INFO: namespace e2e-tests-sched-pred-9vlwb deletion completed in 8.136749071s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:16.399 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:33:34.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 23 11:33:34.805: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:33:34.808: INFO: Number of nodes with available pods: 0 Jul 23 11:33:34.808: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:33:35.814: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:33:35.818: INFO: Number of nodes with available pods: 0 Jul 23 11:33:35.818: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:33:36.813: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:33:36.815: INFO: Number of nodes with available pods: 0 Jul 23 11:33:36.815: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:33:37.813: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:33:37.816: INFO: Number of 
nodes with available pods: 0 Jul 23 11:33:37.816: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:33:38.814: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:33:38.818: INFO: Number of nodes with available pods: 1 Jul 23 11:33:38.818: INFO: Node hunter-worker is running more than one daemon pod Jul 23 11:33:39.812: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:33:39.814: INFO: Number of nodes with available pods: 2 Jul 23 11:33:39.814: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jul 23 11:33:39.844: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 23 11:33:39.889: INFO: Number of nodes with available pods: 2 Jul 23 11:33:39.889: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
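The "revived" step above relies on the DaemonSet controller treating a pod in phase Failed as missing: on the next sync it deletes the failed pod and creates a fresh one for that node. A toy reconcile pass under that assumption (function and field names are illustrative, not controller code):

```python
def sync_daemonset(pods_by_node: dict) -> dict:
    """One reconcile pass: every eligible node must end up with exactly one
    non-failed daemon pod. Failed pods are deleted and replaced; nodes with
    no daemon pod at all (phase None) get one created."""
    actions = {"delete": [], "create": []}
    for node, phase in pods_by_node.items():
        if phase == "Failed":
            actions["delete"].append(node)
            actions["create"].append(node)
        elif phase is None:
            actions["create"].append(node)
    return actions

# The test forces one daemon pod to Failed, then waits for it to be replaced.
state = {"hunter-worker": "Failed", "hunter-worker2": "Running"}
assert sync_daemonset(state) == {"delete": ["hunter-worker"],
                                 "create": ["hunter-worker"]}
```

This is why the log briefly reports fewer available pods after the phase is forced to Failed, then returns to a full count once the replacement pod is running.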
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-p4rl8, will wait for the garbage collector to delete the pods Jul 23 11:33:40.972: INFO: Deleting DaemonSet.extensions daemon-set took: 7.452252ms Jul 23 11:33:41.173: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.303697ms Jul 23 11:33:44.621: INFO: Number of nodes with available pods: 0 Jul 23 11:33:44.621: INFO: Number of running nodes: 0, number of available pods: 0 Jul 23 11:33:44.624: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-p4rl8/daemonsets","resourceVersion":"2354088"},"items":null} Jul 23 11:33:44.626: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-p4rl8/pods","resourceVersion":"2354088"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:33:44.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-p4rl8" for this suite. 
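The repeated "DaemonSet pods can't tolerate node hunter-control-plane with taints" lines above come from the node-eligibility check: a node is skipped unless the pod carries a toleration matching each of the node's taints. A simplified sketch matching only on key and effect (the real rules also handle operators, values, and wildcard tolerations):

```python
def tolerates(taints: list, tolerations: list) -> bool:
    """A pod fits a node only if every taint is matched by some toleration.
    Simplification: a toleration matches when its key and effect equal the
    taint's; Kubernetes additionally supports operators and empty keys."""
    return all(
        any(tol["key"] == taint["key"] and tol["effect"] == taint["effect"]
            for tol in tolerations)
        for taint in taints
    )

control_plane = [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]
assert not tolerates(control_plane, [])  # the test DaemonSet's pods: node is skipped
assert tolerates(control_plane, [{"key": "node-role.kubernetes.io/master",
                                  "effect": "NoSchedule"}])
```

Because the test's DaemonSet declares no such toleration, the control-plane node is excluded from the expected pod count, and only the two worker nodes are checked.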
Jul 23 11:33:50.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:33:50.729: INFO: namespace: e2e-tests-daemonsets-p4rl8, resource: bindings, ignored listing per whitelist Jul 23 11:33:50.736: INFO: namespace e2e-tests-daemonsets-p4rl8 deletion completed in 6.094629041s • [SLOW TEST:16.115 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:33:50.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jul 23 11:33:57.861: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:33:58.918: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-bfjhh" for this suite. Jul 23 11:34:22.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:34:23.003: INFO: namespace: e2e-tests-replicaset-bfjhh, resource: bindings, ignored listing per whitelist Jul 23 11:34:23.028: INFO: namespace e2e-tests-replicaset-bfjhh deletion completed in 24.106496571s • [SLOW TEST:32.291 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:34:23.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-d4rsw I0723 11:34:23.486779 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-d4rsw, replica count: 1 I0723 11:34:24.537177 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0723 
11:34:25.537429 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0723 11:34:26.537705 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0723 11:34:27.537880 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0723 11:34:28.538094 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 23 11:34:28.677: INFO: Created: latency-svc-7nlk9 Jul 23 11:34:28.719: INFO: Got endpoints: latency-svc-7nlk9 [81.533253ms] Jul 23 11:34:28.774: INFO: Created: latency-svc-nk86k Jul 23 11:34:28.789: INFO: Got endpoints: latency-svc-nk86k [69.302472ms] Jul 23 11:34:28.950: INFO: Created: latency-svc-57xhn Jul 23 11:34:28.953: INFO: Got endpoints: latency-svc-57xhn [233.735522ms] Jul 23 11:34:29.026: INFO: Created: latency-svc-h8zn7 Jul 23 11:34:29.042: INFO: Got endpoints: latency-svc-h8zn7 [322.253116ms] Jul 23 11:34:29.105: INFO: Created: latency-svc-4tbhh Jul 23 11:34:29.140: INFO: Got endpoints: latency-svc-4tbhh [420.609529ms] Jul 23 11:34:29.176: INFO: Created: latency-svc-db95t Jul 23 11:34:29.192: INFO: Got endpoints: latency-svc-db95t [472.073418ms] Jul 23 11:34:29.249: INFO: Created: latency-svc-tshvp Jul 23 11:34:29.252: INFO: Got endpoints: latency-svc-tshvp [532.644928ms] Jul 23 11:34:29.315: INFO: Created: latency-svc-gscg6 Jul 23 11:34:29.333: INFO: Got endpoints: latency-svc-gscg6 [613.785459ms] Jul 23 11:34:29.392: INFO: Created: latency-svc-dqzkv Jul 23 11:34:29.415: INFO: Got endpoints: latency-svc-dqzkv [694.97791ms] Jul 23 11:34:29.445: INFO: Created: latency-svc-c477p Jul 23 11:34:29.453: INFO: Got endpoints: latency-svc-c477p [733.720781ms] Jul 23 11:34:29.475: INFO: 
Created: latency-svc-glhq8 Jul 23 11:34:29.484: INFO: Got endpoints: latency-svc-glhq8 [764.123469ms] Jul 23 11:34:29.548: INFO: Created: latency-svc-slxnc Jul 23 11:34:29.551: INFO: Got endpoints: latency-svc-slxnc [831.014369ms] Jul 23 11:34:29.584: INFO: Created: latency-svc-k466l Jul 23 11:34:29.598: INFO: Got endpoints: latency-svc-k466l [878.529665ms] Jul 23 11:34:29.620: INFO: Created: latency-svc-5t7sl Jul 23 11:34:29.745: INFO: Got endpoints: latency-svc-5t7sl [1.025368301s] Jul 23 11:34:29.748: INFO: Created: latency-svc-6m9vq Jul 23 11:34:29.773: INFO: Got endpoints: latency-svc-6m9vq [1.053208301s] Jul 23 11:34:29.835: INFO: Created: latency-svc-lnns2 Jul 23 11:34:29.883: INFO: Got endpoints: latency-svc-lnns2 [1.163128507s] Jul 23 11:34:29.926: INFO: Created: latency-svc-j5xsw Jul 23 11:34:29.965: INFO: Got endpoints: latency-svc-j5xsw [1.176356666s] Jul 23 11:34:30.052: INFO: Created: latency-svc-n9rn6 Jul 23 11:34:30.055: INFO: Got endpoints: latency-svc-n9rn6 [1.101613404s] Jul 23 11:34:30.094: INFO: Created: latency-svc-c6cvn Jul 23 11:34:30.109: INFO: Got endpoints: latency-svc-c6cvn [1.067571485s] Jul 23 11:34:30.129: INFO: Created: latency-svc-cv2wq Jul 23 11:34:30.145: INFO: Got endpoints: latency-svc-cv2wq [1.005166414s] Jul 23 11:34:30.209: INFO: Created: latency-svc-m5mdk Jul 23 11:34:30.236: INFO: Got endpoints: latency-svc-m5mdk [1.044170696s] Jul 23 11:34:30.262: INFO: Created: latency-svc-fs2dv Jul 23 11:34:30.368: INFO: Got endpoints: latency-svc-fs2dv [1.115526949s] Jul 23 11:34:30.381: INFO: Created: latency-svc-jxljb Jul 23 11:34:30.398: INFO: Got endpoints: latency-svc-jxljb [1.064780368s] Jul 23 11:34:30.430: INFO: Created: latency-svc-jp6dk Jul 23 11:34:30.440: INFO: Got endpoints: latency-svc-jp6dk [1.025231543s] Jul 23 11:34:30.466: INFO: Created: latency-svc-txf4z Jul 23 11:34:30.518: INFO: Got endpoints: latency-svc-txf4z [1.064143562s] Jul 23 11:34:30.521: INFO: Created: latency-svc-mcwld Jul 23 11:34:30.536: INFO: Got 
endpoints: latency-svc-mcwld [1.05252369s] Jul 23 11:34:30.563: INFO: Created: latency-svc-j2vtb Jul 23 11:34:30.579: INFO: Got endpoints: latency-svc-j2vtb [1.027975499s] Jul 23 11:34:30.609: INFO: Created: latency-svc-8ffqr Jul 23 11:34:30.679: INFO: Got endpoints: latency-svc-8ffqr [1.081005813s] Jul 23 11:34:30.712: INFO: Created: latency-svc-gdfxl Jul 23 11:34:30.729: INFO: Got endpoints: latency-svc-gdfxl [983.905427ms] Jul 23 11:34:30.754: INFO: Created: latency-svc-vh95s Jul 23 11:34:30.772: INFO: Got endpoints: latency-svc-vh95s [998.868453ms] Jul 23 11:34:30.830: INFO: Created: latency-svc-ksbqc Jul 23 11:34:30.849: INFO: Got endpoints: latency-svc-ksbqc [965.827161ms] Jul 23 11:34:30.898: INFO: Created: latency-svc-972tb Jul 23 11:34:30.910: INFO: Got endpoints: latency-svc-972tb [944.593657ms] Jul 23 11:34:30.994: INFO: Created: latency-svc-tsmmt Jul 23 11:34:31.018: INFO: Got endpoints: latency-svc-tsmmt [963.329234ms] Jul 23 11:34:31.048: INFO: Created: latency-svc-n249k Jul 23 11:34:31.067: INFO: Got endpoints: latency-svc-n249k [957.098299ms] Jul 23 11:34:31.136: INFO: Created: latency-svc-lhs84 Jul 23 11:34:31.173: INFO: Got endpoints: latency-svc-lhs84 [1.0272281s] Jul 23 11:34:31.176: INFO: Created: latency-svc-2s57r Jul 23 11:34:31.187: INFO: Got endpoints: latency-svc-2s57r [950.625472ms] Jul 23 11:34:31.210: INFO: Created: latency-svc-98zsq Jul 23 11:34:31.229: INFO: Got endpoints: latency-svc-98zsq [861.441691ms] Jul 23 11:34:33.564: INFO: Created: latency-svc-69kj4 Jul 23 11:34:33.582: INFO: Got endpoints: latency-svc-69kj4 [3.184041646s] Jul 23 11:34:33.637: INFO: Created: latency-svc-lwqrg Jul 23 11:34:33.640: INFO: Got endpoints: latency-svc-lwqrg [3.199778384s] Jul 23 11:34:33.683: INFO: Created: latency-svc-rjd2b Jul 23 11:34:33.703: INFO: Got endpoints: latency-svc-rjd2b [3.185176535s] Jul 23 11:34:33.725: INFO: Created: latency-svc-q8b5v Jul 23 11:34:33.803: INFO: Got endpoints: latency-svc-q8b5v [3.266949311s] Jul 23 11:34:33.852: 
INFO: Created: latency-svc-5725d Jul 23 11:34:33.872: INFO: Got endpoints: latency-svc-5725d [3.292892144s] Jul 23 11:34:33.899: INFO: Created: latency-svc-8r29h Jul 23 11:34:33.960: INFO: Got endpoints: latency-svc-8r29h [3.280982969s] Jul 23 11:34:33.977: INFO: Created: latency-svc-8crtm Jul 23 11:34:33.998: INFO: Got endpoints: latency-svc-8crtm [3.268660726s] Jul 23 11:34:34.025: INFO: Created: latency-svc-tkhtr Jul 23 11:34:34.034: INFO: Got endpoints: latency-svc-tkhtr [3.261825564s] Jul 23 11:34:34.056: INFO: Created: latency-svc-84lm6 Jul 23 11:34:34.104: INFO: Got endpoints: latency-svc-84lm6 [3.255207975s] Jul 23 11:34:34.122: INFO: Created: latency-svc-p8nvb Jul 23 11:34:34.149: INFO: Got endpoints: latency-svc-p8nvb [3.238927893s] Jul 23 11:34:34.188: INFO: Created: latency-svc-k67bg Jul 23 11:34:34.203: INFO: Got endpoints: latency-svc-k67bg [3.184127986s] Jul 23 11:34:34.266: INFO: Created: latency-svc-q9j5d Jul 23 11:34:34.281: INFO: Got endpoints: latency-svc-q9j5d [3.214382844s] Jul 23 11:34:34.338: INFO: Created: latency-svc-rqnfb Jul 23 11:34:34.365: INFO: Got endpoints: latency-svc-rqnfb [3.192754685s] Jul 23 11:34:34.487: INFO: Created: latency-svc-qpvbs Jul 23 11:34:34.503: INFO: Got endpoints: latency-svc-qpvbs [3.316698202s] Jul 23 11:34:34.529: INFO: Created: latency-svc-7tb5k Jul 23 11:34:34.539: INFO: Got endpoints: latency-svc-7tb5k [3.30991393s] Jul 23 11:34:34.596: INFO: Created: latency-svc-vg6w9 Jul 23 11:34:34.599: INFO: Got endpoints: latency-svc-vg6w9 [1.016438232s] Jul 23 11:34:34.650: INFO: Created: latency-svc-qtvbg Jul 23 11:34:34.673: INFO: Got endpoints: latency-svc-qtvbg [1.033109674s] Jul 23 11:34:34.740: INFO: Created: latency-svc-hx2s6 Jul 23 11:34:34.743: INFO: Got endpoints: latency-svc-hx2s6 [1.039838604s] Jul 23 11:34:34.799: INFO: Created: latency-svc-6k55c Jul 23 11:34:34.817: INFO: Got endpoints: latency-svc-6k55c [1.013346374s] Jul 23 11:34:34.919: INFO: Created: latency-svc-6pp8s Jul 23 11:34:34.923: INFO: Got 
endpoints: latency-svc-6pp8s [1.050863555s] Jul 23 11:34:34.990: INFO: Created: latency-svc-qhl8d Jul 23 11:34:35.009: INFO: Got endpoints: latency-svc-qhl8d [1.048878245s] Jul 23 11:34:35.105: INFO: Created: latency-svc-lc4tw Jul 23 11:34:35.109: INFO: Got endpoints: latency-svc-lc4tw [1.111244376s] Jul 23 11:34:35.147: INFO: Created: latency-svc-b4lh2 Jul 23 11:34:35.165: INFO: Got endpoints: latency-svc-b4lh2 [1.131251459s] Jul 23 11:34:35.190: INFO: Created: latency-svc-28vb2 Jul 23 11:34:35.302: INFO: Got endpoints: latency-svc-28vb2 [1.197799935s] Jul 23 11:34:35.316: INFO: Created: latency-svc-nk2h6 Jul 23 11:34:35.334: INFO: Got endpoints: latency-svc-nk2h6 [1.184584321s] Jul 23 11:34:35.359: INFO: Created: latency-svc-88q7n Jul 23 11:34:35.375: INFO: Got endpoints: latency-svc-88q7n [1.172575947s] Jul 23 11:34:35.399: INFO: Created: latency-svc-9sxbg Jul 23 11:34:35.452: INFO: Got endpoints: latency-svc-9sxbg [1.17031679s] Jul 23 11:34:35.483: INFO: Created: latency-svc-snqmm Jul 23 11:34:35.489: INFO: Got endpoints: latency-svc-snqmm [1.123778561s] Jul 23 11:34:35.518: INFO: Created: latency-svc-khrxm Jul 23 11:34:35.533: INFO: Got endpoints: latency-svc-khrxm [1.029123742s] Jul 23 11:34:35.596: INFO: Created: latency-svc-xjgt4 Jul 23 11:34:35.599: INFO: Got endpoints: latency-svc-xjgt4 [1.059067677s] Jul 23 11:34:35.633: INFO: Created: latency-svc-z6hkw Jul 23 11:34:35.641: INFO: Got endpoints: latency-svc-z6hkw [1.041639136s] Jul 23 11:34:35.663: INFO: Created: latency-svc-vx8fr Jul 23 11:34:35.677: INFO: Got endpoints: latency-svc-vx8fr [1.003972444s] Jul 23 11:34:35.757: INFO: Created: latency-svc-76s6w Jul 23 11:34:35.760: INFO: Got endpoints: latency-svc-76s6w [1.017099243s] Jul 23 11:34:35.800: INFO: Created: latency-svc-hnrdm Jul 23 11:34:35.816: INFO: Got endpoints: latency-svc-hnrdm [998.749134ms] Jul 23 11:34:35.844: INFO: Created: latency-svc-nmp78 Jul 23 11:34:35.851: INFO: Got endpoints: latency-svc-nmp78 [928.576431ms] Jul 23 11:34:35.926: 
INFO: Created: latency-svc-bsld6 Jul 23 11:34:35.945: INFO: Got endpoints: latency-svc-bsld6 [935.641037ms] Jul 23 11:34:35.980: INFO: Created: latency-svc-hjbm7 Jul 23 11:34:35.997: INFO: Got endpoints: latency-svc-hjbm7 [887.529495ms] Jul 23 11:34:36.075: INFO: Created: latency-svc-wvr2q Jul 23 11:34:36.080: INFO: Got endpoints: latency-svc-wvr2q [914.949641ms] Jul 23 11:34:36.112: INFO: Created: latency-svc-v6rvw Jul 23 11:34:36.129: INFO: Got endpoints: latency-svc-v6rvw [826.546569ms] Jul 23 11:34:36.161: INFO: Created: latency-svc-xgtgq Jul 23 11:34:36.218: INFO: Got endpoints: latency-svc-xgtgq [884.502825ms] Jul 23 11:34:36.221: INFO: Created: latency-svc-nnzqx Jul 23 11:34:36.237: INFO: Got endpoints: latency-svc-nnzqx [862.018095ms] Jul 23 11:34:36.268: INFO: Created: latency-svc-mpqq2 Jul 23 11:34:36.285: INFO: Got endpoints: latency-svc-mpqq2 [833.48759ms] Jul 23 11:34:36.311: INFO: Created: latency-svc-clcpk Jul 23 11:34:36.398: INFO: Got endpoints: latency-svc-clcpk [908.500651ms] Jul 23 11:34:36.461: INFO: Created: latency-svc-8fd8r Jul 23 11:34:36.478: INFO: Got endpoints: latency-svc-8fd8r [945.527369ms] Jul 23 11:34:36.572: INFO: Created: latency-svc-sjtnp Jul 23 11:34:36.574: INFO: Got endpoints: latency-svc-sjtnp [975.736111ms] Jul 23 11:34:36.653: INFO: Created: latency-svc-b5hlt Jul 23 11:34:36.670: INFO: Got endpoints: latency-svc-b5hlt [1.029027017s] Jul 23 11:34:36.745: INFO: Created: latency-svc-qd645 Jul 23 11:34:36.748: INFO: Got endpoints: latency-svc-qd645 [1.07073101s] Jul 23 11:34:36.827: INFO: Created: latency-svc-bnwj4 Jul 23 11:34:36.838: INFO: Got endpoints: latency-svc-bnwj4 [1.078126156s] Jul 23 11:34:36.902: INFO: Created: latency-svc-lnt8j Jul 23 11:34:36.904: INFO: Got endpoints: latency-svc-lnt8j [1.088387256s] Jul 23 11:34:36.960: INFO: Created: latency-svc-wtn78 Jul 23 11:34:36.977: INFO: Got endpoints: latency-svc-wtn78 [1.125203197s] Jul 23 11:34:37.039: INFO: Created: latency-svc-x2zv6 Jul 23 11:34:37.042: INFO: Got 
endpoints: latency-svc-x2zv6 [1.096600325s] Jul 23 11:34:37.098: INFO: Created: latency-svc-hw5fx Jul 23 11:34:37.115: INFO: Got endpoints: latency-svc-hw5fx [1.118154465s] Jul 23 11:34:37.176: INFO: Created: latency-svc-qdhxm Jul 23 11:34:37.181: INFO: Got endpoints: latency-svc-qdhxm [1.101126086s] Jul 23 11:34:37.210: INFO: Created: latency-svc-gg5gl Jul 23 11:34:37.223: INFO: Got endpoints: latency-svc-gg5gl [1.094761501s] Jul 23 11:34:37.246: INFO: Created: latency-svc-nqgxv Jul 23 11:34:37.260: INFO: Got endpoints: latency-svc-nqgxv [1.04144402s] Jul 23 11:34:37.332: INFO: Created: latency-svc-qhx68 Jul 23 11:34:37.335: INFO: Got endpoints: latency-svc-qhx68 [1.097022802s] Jul 23 11:34:37.367: INFO: Created: latency-svc-2vncc Jul 23 11:34:37.380: INFO: Got endpoints: latency-svc-2vncc [1.09489611s] Jul 23 11:34:37.428: INFO: Created: latency-svc-zfstn Jul 23 11:34:37.475: INFO: Got endpoints: latency-svc-zfstn [1.077439614s] Jul 23 11:34:37.487: INFO: Created: latency-svc-5862l Jul 23 11:34:37.522: INFO: Got endpoints: latency-svc-5862l [1.043336238s] Jul 23 11:34:37.558: INFO: Created: latency-svc-cjjqz Jul 23 11:34:37.631: INFO: Got endpoints: latency-svc-cjjqz [1.05694465s] Jul 23 11:34:37.635: INFO: Created: latency-svc-msj4f Jul 23 11:34:37.645: INFO: Got endpoints: latency-svc-msj4f [975.263021ms] Jul 23 11:34:37.667: INFO: Created: latency-svc-rbwmn Jul 23 11:34:37.681: INFO: Got endpoints: latency-svc-rbwmn [933.574682ms] Jul 23 11:34:37.714: INFO: Created: latency-svc-xffd5 Jul 23 11:34:37.730: INFO: Got endpoints: latency-svc-xffd5 [891.555883ms] Jul 23 11:34:39.560: INFO: Created: latency-svc-5h8mf Jul 23 11:34:39.625: INFO: Created: latency-svc-w24mp Jul 23 11:34:39.697: INFO: Got endpoints: latency-svc-5h8mf [2.793123032s] Jul 23 11:34:39.698: INFO: Created: latency-svc-zw5sf Jul 23 11:34:39.728: INFO: Got endpoints: latency-svc-zw5sf [2.685770837s] Jul 23 11:34:39.770: INFO: Got endpoints: latency-svc-w24mp [2.79305651s] Jul 23 11:34:39.772: 
INFO: Created: latency-svc-v2nq4 Jul 23 11:34:39.785: INFO: Got endpoints: latency-svc-v2nq4 [2.669695366s] Jul 23 11:34:39.835: INFO: Created: latency-svc-t7rk7 Jul 23 11:34:39.838: INFO: Got endpoints: latency-svc-t7rk7 [2.657109059s] Jul 23 11:34:39.895: INFO: Created: latency-svc-9brvj Jul 23 11:34:39.917: INFO: Got endpoints: latency-svc-9brvj [2.693832059s] Jul 23 11:34:39.973: INFO: Created: latency-svc-lgbgf Jul 23 11:34:39.976: INFO: Got endpoints: latency-svc-lgbgf [2.716082957s] Jul 23 11:34:40.057: INFO: Created: latency-svc-plbgh Jul 23 11:34:40.110: INFO: Got endpoints: latency-svc-plbgh [2.77563205s] Jul 23 11:34:40.123: INFO: Created: latency-svc-86tv7 Jul 23 11:34:40.140: INFO: Got endpoints: latency-svc-86tv7 [2.759814865s] Jul 23 11:34:40.171: INFO: Created: latency-svc-c96df Jul 23 11:34:40.189: INFO: Got endpoints: latency-svc-c96df [2.713404057s] Jul 23 11:34:40.288: INFO: Created: latency-svc-wm4t8 Jul 23 11:34:40.309: INFO: Got endpoints: latency-svc-wm4t8 [2.787441748s] Jul 23 11:34:40.346: INFO: Created: latency-svc-p9z27 Jul 23 11:34:40.362: INFO: Got endpoints: latency-svc-p9z27 [2.731032164s] Jul 23 11:34:40.440: INFO: Created: latency-svc-qz76r Jul 23 11:34:40.453: INFO: Got endpoints: latency-svc-qz76r [2.80752267s] Jul 23 11:34:40.491: INFO: Created: latency-svc-cmbrz Jul 23 11:34:40.507: INFO: Got endpoints: latency-svc-cmbrz [2.825430154s] Jul 23 11:34:40.531: INFO: Created: latency-svc-qlh55 Jul 23 11:34:40.571: INFO: Got endpoints: latency-svc-qlh55 [2.841470237s] Jul 23 11:34:40.639: INFO: Created: latency-svc-czrcl Jul 23 11:34:40.663: INFO: Got endpoints: latency-svc-czrcl [965.449304ms] Jul 23 11:34:40.730: INFO: Created: latency-svc-tb7vc Jul 23 11:34:40.747: INFO: Got endpoints: latency-svc-tb7vc [1.019822114s] Jul 23 11:34:40.782: INFO: Created: latency-svc-f6b4n Jul 23 11:34:40.841: INFO: Got endpoints: latency-svc-f6b4n [1.071422918s] Jul 23 11:34:40.854: INFO: Created: latency-svc-9hxtk Jul 23 11:34:40.880: INFO: Got 
endpoints: latency-svc-9hxtk [1.095139685s] Jul 23 11:34:40.903: INFO: Created: latency-svc-rvzl8 Jul 23 11:34:40.916: INFO: Got endpoints: latency-svc-rvzl8 [1.077232983s] Jul 23 11:34:40.939: INFO: Created: latency-svc-frlgz Jul 23 11:34:40.991: INFO: Got endpoints: latency-svc-frlgz [1.073336244s] Jul 23 11:34:41.011: INFO: Created: latency-svc-t88lv Jul 23 11:34:41.052: INFO: Got endpoints: latency-svc-t88lv [1.076536601s] Jul 23 11:34:41.135: INFO: Created: latency-svc-9zltf Jul 23 11:34:41.166: INFO: Got endpoints: latency-svc-9zltf [1.055815778s] Jul 23 11:34:41.210: INFO: Created: latency-svc-785m6 Jul 23 11:34:41.223: INFO: Got endpoints: latency-svc-785m6 [1.082849507s] Jul 23 11:34:41.275: INFO: Created: latency-svc-nx4hx Jul 23 11:34:41.277: INFO: Got endpoints: latency-svc-nx4hx [1.087642449s] Jul 23 11:34:41.305: INFO: Created: latency-svc-dmcsm Jul 23 11:34:41.319: INFO: Got endpoints: latency-svc-dmcsm [1.00995405s] Jul 23 11:34:41.345: INFO: Created: latency-svc-jb7h5 Jul 23 11:34:41.367: INFO: Got endpoints: latency-svc-jb7h5 [1.004946132s] Jul 23 11:34:41.453: INFO: Created: latency-svc-rvsdk Jul 23 11:34:41.455: INFO: Got endpoints: latency-svc-rvsdk [1.002129016s] Jul 23 11:34:41.490: INFO: Created: latency-svc-2vk76 Jul 23 11:34:41.506: INFO: Got endpoints: latency-svc-2vk76 [998.625848ms] Jul 23 11:34:41.532: INFO: Created: latency-svc-v2hvz Jul 23 11:34:41.542: INFO: Got endpoints: latency-svc-v2hvz [970.597409ms] Jul 23 11:34:41.602: INFO: Created: latency-svc-b5qvt Jul 23 11:34:41.610: INFO: Got endpoints: latency-svc-b5qvt [946.685623ms] Jul 23 11:34:41.647: INFO: Created: latency-svc-4lpvt Jul 23 11:34:41.662: INFO: Got endpoints: latency-svc-4lpvt [915.020131ms] Jul 23 11:34:41.690: INFO: Created: latency-svc-ztgbm Jul 23 11:34:41.755: INFO: Got endpoints: latency-svc-ztgbm [913.901488ms] Jul 23 11:34:41.808: INFO: Created: latency-svc-dqxk2 Jul 23 11:34:41.838: INFO: Got endpoints: latency-svc-dqxk2 [958.232697ms] Jul 23 11:34:41.902: 
INFO: Created: latency-svc-zgcvb Jul 23 11:34:41.904: INFO: Got endpoints: latency-svc-zgcvb [988.210401ms] Jul 23 11:34:41.941: INFO: Created: latency-svc-2k4hh Jul 23 11:34:41.957: INFO: Got endpoints: latency-svc-2k4hh [966.681505ms] Jul 23 11:34:41.982: INFO: Created: latency-svc-wl4cc Jul 23 11:34:42.000: INFO: Got endpoints: latency-svc-wl4cc [947.357482ms] Jul 23 11:34:42.060: INFO: Created: latency-svc-wxkjk Jul 23 11:34:42.078: INFO: Got endpoints: latency-svc-wxkjk [911.590905ms] Jul 23 11:34:42.102: INFO: Created: latency-svc-ssm92 Jul 23 11:34:42.114: INFO: Got endpoints: latency-svc-ssm92 [890.994485ms] Jul 23 11:34:42.139: INFO: Created: latency-svc-q9ntb Jul 23 11:34:42.189: INFO: Got endpoints: latency-svc-q9ntb [912.162038ms] Jul 23 11:34:42.199: INFO: Created: latency-svc-rjbbb Jul 23 11:34:42.217: INFO: Got endpoints: latency-svc-rjbbb [897.524936ms] Jul 23 11:34:42.368: INFO: Created: latency-svc-7722t Jul 23 11:34:42.371: INFO: Got endpoints: latency-svc-7722t [1.003181097s] Jul 23 11:34:42.433: INFO: Created: latency-svc-lfgnk Jul 23 11:34:42.554: INFO: Got endpoints: latency-svc-lfgnk [1.098682622s] Jul 23 11:34:42.564: INFO: Created: latency-svc-wnxs2 Jul 23 11:34:42.577: INFO: Got endpoints: latency-svc-wnxs2 [1.071300849s] Jul 23 11:34:42.625: INFO: Created: latency-svc-9zzkg Jul 23 11:34:42.637: INFO: Got endpoints: latency-svc-9zzkg [1.09550456s] Jul 23 11:34:42.705: INFO: Created: latency-svc-xlphw Jul 23 11:34:42.720: INFO: Got endpoints: latency-svc-xlphw [1.110873311s] Jul 23 11:34:42.768: INFO: Created: latency-svc-qlxbr Jul 23 11:34:42.782: INFO: Got endpoints: latency-svc-qlxbr [1.119115236s] Jul 23 11:34:42.847: INFO: Created: latency-svc-495z7 Jul 23 11:34:42.850: INFO: Got endpoints: latency-svc-495z7 [1.094396054s] Jul 23 11:34:42.906: INFO: Created: latency-svc-s4jc4 Jul 23 11:34:42.932: INFO: Got endpoints: latency-svc-s4jc4 [1.093752637s] Jul 23 11:34:43.025: INFO: Created: latency-svc-cl6lg Jul 23 11:34:43.037: INFO: Got 
endpoints: latency-svc-cl6lg [1.132964621s] Jul 23 11:34:43.074: INFO: Created: latency-svc-fdvtb Jul 23 11:34:43.083: INFO: Got endpoints: latency-svc-fdvtb [1.1250677s] Jul 23 11:34:43.188: INFO: Created: latency-svc-lrbfr Jul 23 11:34:43.218: INFO: Got endpoints: latency-svc-lrbfr [1.218236392s] Jul 23 11:34:43.219: INFO: Created: latency-svc-rd5vk Jul 23 11:34:43.233: INFO: Got endpoints: latency-svc-rd5vk [1.15496552s] Jul 23 11:34:43.254: INFO: Created: latency-svc-7c4sq Jul 23 11:34:43.269: INFO: Got endpoints: latency-svc-7c4sq [1.15507835s] Jul 23 11:34:43.332: INFO: Created: latency-svc-4mq6n Jul 23 11:34:43.335: INFO: Got endpoints: latency-svc-4mq6n [1.146631124s] Jul 23 11:34:43.368: INFO: Created: latency-svc-wnwtc Jul 23 11:34:43.377: INFO: Got endpoints: latency-svc-wnwtc [1.160441806s] Jul 23 11:34:43.404: INFO: Created: latency-svc-fq8cm Jul 23 11:34:43.420: INFO: Got endpoints: latency-svc-fq8cm [1.049148664s] Jul 23 11:34:43.488: INFO: Created: latency-svc-zmtx6 Jul 23 11:34:43.491: INFO: Got endpoints: latency-svc-zmtx6 [937.065387ms] Jul 23 11:34:43.547: INFO: Created: latency-svc-tnhqb Jul 23 11:34:43.577: INFO: Got endpoints: latency-svc-tnhqb [999.899284ms] Jul 23 11:34:43.638: INFO: Created: latency-svc-k4wpf Jul 23 11:34:43.651: INFO: Got endpoints: latency-svc-k4wpf [1.01398546s] Jul 23 11:34:43.675: INFO: Created: latency-svc-57qrb Jul 23 11:34:43.694: INFO: Got endpoints: latency-svc-57qrb [973.221342ms] Jul 23 11:34:43.716: INFO: Created: latency-svc-9vd2g Jul 23 11:34:43.736: INFO: Got endpoints: latency-svc-9vd2g [954.187095ms] Jul 23 11:34:43.795: INFO: Created: latency-svc-q977x Jul 23 11:34:43.826: INFO: Got endpoints: latency-svc-q977x [976.372916ms] Jul 23 11:34:43.872: INFO: Created: latency-svc-d7ln2 Jul 23 11:34:43.925: INFO: Got endpoints: latency-svc-d7ln2 [992.800217ms] Jul 23 11:34:43.938: INFO: Created: latency-svc-qcbgk Jul 23 11:34:43.953: INFO: Got endpoints: latency-svc-qcbgk [915.481374ms] Jul 23 11:34:43.980: 
INFO: Created: latency-svc-kphxh Jul 23 11:34:43.989: INFO: Got endpoints: latency-svc-kphxh [906.32694ms] Jul 23 11:34:44.094: INFO: Created: latency-svc-mzmpr Jul 23 11:34:44.096: INFO: Got endpoints: latency-svc-mzmpr [877.563872ms] Jul 23 11:34:44.243: INFO: Created: latency-svc-m8rc2 Jul 23 11:34:44.250: INFO: Got endpoints: latency-svc-m8rc2 [1.016704314s] Jul 23 11:34:44.286: INFO: Created: latency-svc-mn6f8 Jul 23 11:34:44.319: INFO: Got endpoints: latency-svc-mn6f8 [1.050162815s] Jul 23 11:34:44.406: INFO: Created: latency-svc-j8bdw Jul 23 11:34:44.427: INFO: Got endpoints: latency-svc-j8bdw [1.091548698s] Jul 23 11:34:44.454: INFO: Created: latency-svc-nggvj Jul 23 11:34:44.482: INFO: Got endpoints: latency-svc-nggvj [1.104670804s] Jul 23 11:34:44.560: INFO: Created: latency-svc-7rthd Jul 23 11:34:44.583: INFO: Got endpoints: latency-svc-7rthd [1.163546851s] Jul 23 11:34:44.615: INFO: Created: latency-svc-7jdzx Jul 23 11:34:44.638: INFO: Got endpoints: latency-svc-7jdzx [1.147694179s] Jul 23 11:34:44.729: INFO: Created: latency-svc-mlfm2 Jul 23 11:34:44.753: INFO: Got endpoints: latency-svc-mlfm2 [1.175554361s] Jul 23 11:34:44.778: INFO: Created: latency-svc-szscv Jul 23 11:34:44.789: INFO: Got endpoints: latency-svc-szscv [1.137121251s] Jul 23 11:34:44.847: INFO: Created: latency-svc-c9xxm Jul 23 11:34:44.861: INFO: Got endpoints: latency-svc-c9xxm [1.166990505s] Jul 23 11:34:44.885: INFO: Created: latency-svc-ccdzd Jul 23 11:34:44.903: INFO: Got endpoints: latency-svc-ccdzd [1.167136205s] Jul 23 11:34:44.931: INFO: Created: latency-svc-g85jt Jul 23 11:34:44.945: INFO: Got endpoints: latency-svc-g85jt [1.119011853s] Jul 23 11:34:45.024: INFO: Created: latency-svc-hwsbh Jul 23 11:34:45.035: INFO: Got endpoints: latency-svc-hwsbh [1.110050525s] Jul 23 11:34:45.060: INFO: Created: latency-svc-92pnp Jul 23 11:34:45.077: INFO: Got endpoints: latency-svc-92pnp [1.124743786s] Jul 23 11:34:45.165: INFO: Created: latency-svc-nft2h Jul 23 11:34:45.222: INFO: Got 
endpoints: latency-svc-nft2h [1.232807449s] Jul 23 11:34:45.222: INFO: Created: latency-svc-6ztqz Jul 23 11:34:45.234: INFO: Got endpoints: latency-svc-6ztqz [1.137728635s] Jul 23 11:34:45.309: INFO: Created: latency-svc-48r9l Jul 23 11:34:45.311: INFO: Got endpoints: latency-svc-48r9l [1.061256265s] Jul 23 11:34:45.347: INFO: Created: latency-svc-jrqxl Jul 23 11:34:45.354: INFO: Got endpoints: latency-svc-jrqxl [1.034325489s] Jul 23 11:34:45.377: INFO: Created: latency-svc-8bphr Jul 23 11:34:45.384: INFO: Got endpoints: latency-svc-8bphr [956.985785ms] Jul 23 11:34:45.467: INFO: Created: latency-svc-j5wgl Jul 23 11:34:45.470: INFO: Got endpoints: latency-svc-j5wgl [988.024891ms] Jul 23 11:34:45.552: INFO: Created: latency-svc-lsmgn Jul 23 11:34:45.631: INFO: Got endpoints: latency-svc-lsmgn [1.047767603s] Jul 23 11:34:45.633: INFO: Created: latency-svc-tbrf7 Jul 23 11:34:45.643: INFO: Got endpoints: latency-svc-tbrf7 [1.004211776s] Jul 23 11:34:45.817: INFO: Created: latency-svc-wbbrl Jul 23 11:34:45.841: INFO: Got endpoints: latency-svc-wbbrl [1.088668939s] Jul 23 11:34:45.869: INFO: Created: latency-svc-4mpm8 Jul 23 11:34:45.883: INFO: Got endpoints: latency-svc-4mpm8 [1.094426229s] Jul 23 11:34:45.911: INFO: Created: latency-svc-fbtc6 Jul 23 11:34:45.967: INFO: Got endpoints: latency-svc-fbtc6 [1.106036949s] Jul 23 11:34:45.982: INFO: Created: latency-svc-7jklh Jul 23 11:34:45.998: INFO: Got endpoints: latency-svc-7jklh [1.095177281s] Jul 23 11:34:46.026: INFO: Created: latency-svc-djwpk Jul 23 11:34:46.034: INFO: Got endpoints: latency-svc-djwpk [1.088376219s] Jul 23 11:34:46.135: INFO: Created: latency-svc-wzrvn Jul 23 11:34:46.138: INFO: Got endpoints: latency-svc-wzrvn [1.10257215s] Jul 23 11:34:46.223: INFO: Created: latency-svc-7kglj Jul 23 11:34:46.272: INFO: Got endpoints: latency-svc-7kglj [1.19467926s] Jul 23 11:34:46.531: INFO: Created: latency-svc-2nqsk Jul 23 11:34:46.556: INFO: Got endpoints: latency-svc-2nqsk [1.334110704s] Jul 23 11:34:46.608: 
INFO: Created: latency-svc-q52lx Jul 23 11:34:46.622: INFO: Got endpoints: latency-svc-q52lx [1.388469114s] Jul 23 11:34:46.698: INFO: Created: latency-svc-prhf5 Jul 23 11:34:46.726: INFO: Got endpoints: latency-svc-prhf5 [1.415363803s] Jul 23 11:34:46.763: INFO: Created: latency-svc-qt5ct Jul 23 11:34:46.786: INFO: Got endpoints: latency-svc-qt5ct [1.431877542s] Jul 23 11:34:46.847: INFO: Created: latency-svc-glw47 Jul 23 11:34:46.856: INFO: Got endpoints: latency-svc-glw47 [1.472237397s] Jul 23 11:34:46.882: INFO: Created: latency-svc-99qcv Jul 23 11:34:46.899: INFO: Got endpoints: latency-svc-99qcv [1.428757862s] Jul 23 11:34:46.899: INFO: Latencies: [69.302472ms 233.735522ms 322.253116ms 420.609529ms 472.073418ms 532.644928ms 613.785459ms 694.97791ms 733.720781ms 764.123469ms 826.546569ms 831.014369ms 833.48759ms 861.441691ms 862.018095ms 877.563872ms 878.529665ms 884.502825ms 887.529495ms 890.994485ms 891.555883ms 897.524936ms 906.32694ms 908.500651ms 911.590905ms 912.162038ms 913.901488ms 914.949641ms 915.020131ms 915.481374ms 928.576431ms 933.574682ms 935.641037ms 937.065387ms 944.593657ms 945.527369ms 946.685623ms 947.357482ms 950.625472ms 954.187095ms 956.985785ms 957.098299ms 958.232697ms 963.329234ms 965.449304ms 965.827161ms 966.681505ms 970.597409ms 973.221342ms 975.263021ms 975.736111ms 976.372916ms 983.905427ms 988.024891ms 988.210401ms 992.800217ms 998.625848ms 998.749134ms 998.868453ms 999.899284ms 1.002129016s 1.003181097s 1.003972444s 1.004211776s 1.004946132s 1.005166414s 1.00995405s 1.013346374s 1.01398546s 1.016438232s 1.016704314s 1.017099243s 1.019822114s 1.025231543s 1.025368301s 1.0272281s 1.027975499s 1.029027017s 1.029123742s 1.033109674s 1.034325489s 1.039838604s 1.04144402s 1.041639136s 1.043336238s 1.044170696s 1.047767603s 1.048878245s 1.049148664s 1.050162815s 1.050863555s 1.05252369s 1.053208301s 1.055815778s 1.05694465s 1.059067677s 1.061256265s 1.064143562s 1.064780368s 1.067571485s 1.07073101s 1.071300849s 1.071422918s 
1.073336244s 1.076536601s 1.077232983s 1.077439614s 1.078126156s 1.081005813s 1.082849507s 1.087642449s 1.088376219s 1.088387256s 1.088668939s 1.091548698s 1.093752637s 1.094396054s 1.094426229s 1.094761501s 1.09489611s 1.095139685s 1.095177281s 1.09550456s 1.096600325s 1.097022802s 1.098682622s 1.101126086s 1.101613404s 1.10257215s 1.104670804s 1.106036949s 1.110050525s 1.110873311s 1.111244376s 1.115526949s 1.118154465s 1.119011853s 1.119115236s 1.123778561s 1.124743786s 1.1250677s 1.125203197s 1.131251459s 1.132964621s 1.137121251s 1.137728635s 1.146631124s 1.147694179s 1.15496552s 1.15507835s 1.160441806s 1.163128507s 1.163546851s 1.166990505s 1.167136205s 1.17031679s 1.172575947s 1.175554361s 1.176356666s 1.184584321s 1.19467926s 1.197799935s 1.218236392s 1.232807449s 1.334110704s 1.388469114s 1.415363803s 1.428757862s 1.431877542s 1.472237397s 2.657109059s 2.669695366s 2.685770837s 2.693832059s 2.713404057s 2.716082957s 2.731032164s 2.759814865s 2.77563205s 2.787441748s 2.79305651s 2.793123032s 2.80752267s 2.825430154s 2.841470237s 3.184041646s 3.184127986s 3.185176535s 3.192754685s 3.199778384s 3.214382844s 3.238927893s 3.255207975s 3.261825564s 3.266949311s 3.268660726s 3.280982969s 3.292892144s 3.30991393s 3.316698202s] Jul 23 11:34:46.899: INFO: 50 %ile: 1.07073101s Jul 23 11:34:46.899: INFO: 90 %ile: 2.79305651s Jul 23 11:34:46.899: INFO: 99 %ile: 3.30991393s Jul 23 11:34:46.899: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:34:46.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-d4rsw" for this suite. 
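The latency test above collects 200 endpoint-creation samples and reports the 50th, 90th, and 99th percentiles. A minimal sketch of a nearest-rank percentile over such samples (illustrative only; the e2e framework's exact rounding rule may differ):

```python
def percentile(latencies, p):
    """Nearest-rank percentile: sort the samples, then index into
    the sorted list at position len * p / 100 (clamped to the end)."""
    s = sorted(latencies)
    idx = min(int(len(s) * p / 100), len(s) - 1)
    return s[idx]

# Synthetic samples in seconds (not the log's values).
samples = [0.9, 1.1, 1.0, 3.2, 2.7, 1.2, 1.05, 0.95, 1.15, 2.8]
print(percentile(samples, 50))
print(percentile(samples, 99))
```

With 200 sorted samples, the 90th percentile is simply the sample at index 180, which is why a handful of ~2.7-3.3s outliers dominate the reported 90 %ile and 99 %ile while the median stays near 1.07s.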
Jul 23 11:35:18.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:35:18.984: INFO: namespace: e2e-tests-svc-latency-d4rsw, resource: bindings, ignored listing per whitelist Jul 23 11:35:18.994: INFO: namespace e2e-tests-svc-latency-d4rsw deletion completed in 32.088431287s • [SLOW TEST:55.966 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:35:18.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 23 11:35:19.103: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:35:26.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-init-container-rhkq6" for this suite. Jul 23 11:35:48.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:35:48.612: INFO: namespace: e2e-tests-init-container-rhkq6, resource: bindings, ignored listing per whitelist Jul 23 11:35:48.635: INFO: namespace e2e-tests-init-container-rhkq6 deletion completed in 22.094865163s • [SLOW TEST:29.640 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:35:48.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 23 11:35:48.714: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:35:52.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-pods-tl6vv" for this suite. Jul 23 11:36:42.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:36:42.819: INFO: namespace: e2e-tests-pods-tl6vv, resource: bindings, ignored listing per whitelist Jul 23 11:36:42.889: INFO: namespace e2e-tests-pods-tl6vv deletion completed in 50.11225604s • [SLOW TEST:54.254 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:36:42.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 23 11:36:42.992: INFO: Waiting up to 5m0s for pod "downward-api-c8122471-ccd8-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-h7q9f" to be "success or failure" Jul 23 11:36:43.010: INFO: Pod "downward-api-c8122471-ccd8-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.509517ms Jul 23 11:36:45.013: INFO: Pod "downward-api-c8122471-ccd8-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021878564s Jul 23 11:36:47.017: INFO: Pod "downward-api-c8122471-ccd8-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025856293s STEP: Saw pod success Jul 23 11:36:47.018: INFO: Pod "downward-api-c8122471-ccd8-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:36:47.021: INFO: Trying to get logs from node hunter-worker pod downward-api-c8122471-ccd8-11ea-92a5-0242ac11000b container dapi-container: STEP: delete the pod Jul 23 11:36:47.314: INFO: Waiting for pod downward-api-c8122471-ccd8-11ea-92a5-0242ac11000b to disappear Jul 23 11:36:47.374: INFO: Pod downward-api-c8122471-ccd8-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:36:47.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-h7q9f" for this suite. 
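The Downward API test above verifies that a container's limits.cpu/memory and requests.cpu/memory are injected as environment variables. A hedged sketch of the kind of pod manifest such a test exercises, using `resourceFieldRef`; the names and resource quantities here are illustrative, not the test's actual values:

```python
# Sketch of a pod manifest exposing container resource limits/requests
# as env vars via the downward API (resourceFieldRef).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-demo"},  # illustrative name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "command": ["sh", "-c", "env"],  # print env, then exit 0
            "resources": {
                "requests": {"cpu": "250m", "memory": "32Mi"},
                "limits": {"cpu": "1250m", "memory": "64Mi"},
            },
            "env": [
                {"name": "CPU_LIMIT",
                 "valueFrom": {"resourceFieldRef": {"resource": "limits.cpu"}}},
                {"name": "MEMORY_REQUEST",
                 "valueFrom": {"resourceFieldRef": {"resource": "requests.memory"}}},
            ],
        }],
    },
}
```

Because the container just prints its environment and exits, the pod goes Pending → Succeeded, which matches the phase transitions logged above; the test then fetches the container log and greps for the expected variable values.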
Jul 23 11:36:53.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:36:53.447: INFO: namespace: e2e-tests-downward-api-h7q9f, resource: bindings, ignored listing per whitelist Jul 23 11:36:53.458: INFO: namespace e2e-tests-downward-api-h7q9f deletion completed in 6.080541176s • [SLOW TEST:10.568 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:36:53.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Jul 23 11:36:53.555: INFO: Waiting up to 5m0s for pod "client-containers-ce5e0922-ccd8-11ea-92a5-0242ac11000b" in namespace "e2e-tests-containers-nnr4h" to be "success or failure" Jul 23 11:36:53.595: INFO: Pod "client-containers-ce5e0922-ccd8-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.489171ms Jul 23 11:36:55.599: INFO: Pod "client-containers-ce5e0922-ccd8-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043324998s Jul 23 11:36:57.603: INFO: Pod "client-containers-ce5e0922-ccd8-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047415628s STEP: Saw pod success Jul 23 11:36:57.603: INFO: Pod "client-containers-ce5e0922-ccd8-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:36:57.606: INFO: Trying to get logs from node hunter-worker2 pod client-containers-ce5e0922-ccd8-11ea-92a5-0242ac11000b container test-container: STEP: delete the pod Jul 23 11:36:57.625: INFO: Waiting for pod client-containers-ce5e0922-ccd8-11ea-92a5-0242ac11000b to disappear Jul 23 11:36:57.630: INFO: Pod client-containers-ce5e0922-ccd8-11ea-92a5-0242ac11000b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:36:57.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-nnr4h" for this suite. 
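The Docker Containers test above checks that a pod-level `command` overrides the image's default ENTRYPOINT. A minimal sketch of the shape of such a pod spec (names and command are illustrative assumptions, not the test's actual values):

```python
# Sketch: setting spec.containers[].command replaces the image's
# ENTRYPOINT entirely; setting only args would replace CMD instead.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "entrypoint-override-demo"},  # illustrative
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            # Overrides whatever ENTRYPOINT the image declares.
            "command": ["echo", "overridden entrypoint"],
        }],
    },
}
```

The distinction the test relies on: Kubernetes `command` maps to Docker ENTRYPOINT and `args` maps to CMD, so supplying `command` alone discards both the image's ENTRYPOINT and its CMD.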
Jul 23 11:37:03.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:37:03.703: INFO: namespace: e2e-tests-containers-nnr4h, resource: bindings, ignored listing per whitelist Jul 23 11:37:03.717: INFO: namespace e2e-tests-containers-nnr4h deletion completed in 6.084554767s • [SLOW TEST:10.259 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:37:03.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-kkpk STEP: Creating a pod to test atomic-volume-subpath Jul 23 11:37:03.914: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kkpk" in namespace "e2e-tests-subpath-5cgv4" to be "success or failure" Jul 23 11:37:03.918: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.363824ms Jul 23 11:37:05.927: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012232993s Jul 23 11:37:07.930: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015866504s Jul 23 11:37:09.935: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02005976s Jul 23 11:37:11.939: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Running", Reason="", readiness=false. Elapsed: 8.024592383s Jul 23 11:37:13.943: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Running", Reason="", readiness=false. Elapsed: 10.028676576s Jul 23 11:37:15.947: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Running", Reason="", readiness=false. Elapsed: 12.03265238s Jul 23 11:37:17.952: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Running", Reason="", readiness=false. Elapsed: 14.036998637s Jul 23 11:37:19.955: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Running", Reason="", readiness=false. Elapsed: 16.04082353s Jul 23 11:37:21.960: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Running", Reason="", readiness=false. Elapsed: 18.045073795s Jul 23 11:37:23.964: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Running", Reason="", readiness=false. Elapsed: 20.0493549s Jul 23 11:37:25.967: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Running", Reason="", readiness=false. Elapsed: 22.052894974s Jul 23 11:37:27.972: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Running", Reason="", readiness=false. Elapsed: 24.05709236s Jul 23 11:37:29.976: INFO: Pod "pod-subpath-test-secret-kkpk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.061558038s STEP: Saw pod success Jul 23 11:37:29.976: INFO: Pod "pod-subpath-test-secret-kkpk" satisfied condition "success or failure" Jul 23 11:37:29.980: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-kkpk container test-container-subpath-secret-kkpk: STEP: delete the pod Jul 23 11:37:30.014: INFO: Waiting for pod pod-subpath-test-secret-kkpk to disappear Jul 23 11:37:30.032: INFO: Pod pod-subpath-test-secret-kkpk no longer exists STEP: Deleting pod pod-subpath-test-secret-kkpk Jul 23 11:37:30.032: INFO: Deleting pod "pod-subpath-test-secret-kkpk" in namespace "e2e-tests-subpath-5cgv4" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:37:30.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-5cgv4" for this suite. Jul 23 11:37:36.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:37:36.067: INFO: namespace: e2e-tests-subpath-5cgv4, resource: bindings, ignored listing per whitelist Jul 23 11:37:36.118: INFO: namespace e2e-tests-subpath-5cgv4 deletion completed in 6.080979191s • [SLOW TEST:32.401 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected 
configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:37:36.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e7caa829-ccd8-11ea-92a5-0242ac11000b STEP: Creating a pod to test consume configMaps Jul 23 11:37:36.238: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7cdbb16-ccd8-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-d6mg7" to be "success or failure" Jul 23 11:37:36.242: INFO: Pod "pod-projected-configmaps-e7cdbb16-ccd8-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06135ms Jul 23 11:37:38.246: INFO: Pod "pod-projected-configmaps-e7cdbb16-ccd8-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00831816s Jul 23 11:37:40.250: INFO: Pod "pod-projected-configmaps-e7cdbb16-ccd8-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012177795s STEP: Saw pod success Jul 23 11:37:40.250: INFO: Pod "pod-projected-configmaps-e7cdbb16-ccd8-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:37:40.253: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-e7cdbb16-ccd8-11ea-92a5-0242ac11000b container projected-configmap-volume-test: STEP: delete the pod Jul 23 11:37:40.274: INFO: Waiting for pod pod-projected-configmaps-e7cdbb16-ccd8-11ea-92a5-0242ac11000b to disappear Jul 23 11:37:40.278: INFO: Pod pod-projected-configmaps-e7cdbb16-ccd8-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:37:40.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d6mg7" for this suite. Jul 23 11:37:46.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:37:46.376: INFO: namespace: e2e-tests-projected-d6mg7, resource: bindings, ignored listing per whitelist Jul 23 11:37:46.378: INFO: namespace e2e-tests-projected-d6mg7 deletion completed in 6.09698092s • [SLOW TEST:10.259 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:37:46.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Jul 23 11:37:46.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jul 23 11:37:46.704: INFO: stderr: "" Jul 23 11:37:46.704: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:37:46.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zq6wc" for this suite. 
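The kubectl test above shells out to `kubectl api-versions` and asserts that the core group version `v1` appears in the output. A sketch of that check against an abridged copy of the stdout captured in the log:

```python
# `kubectl api-versions` prints one group/version per line.
# stdout is abridged from the log above; the full output lists ~28 versions.
stdout = (
    "admissionregistration.k8s.io/v1beta1\n"
    "apps/v1\n"
    "batch/v1\n"
    "networking.k8s.io/v1\n"
    "rbac.authorization.k8s.io/v1\n"
    "v1\n"
)
versions = stdout.strip().split("\n")
assert "v1" in versions  # core API group, as the conformance test requires
```

Note that membership is checked against whole lines: matching the substring "v1" would spuriously succeed on entries like `apps/v1`, so the output must be split per line first.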
Jul 23 11:37:52.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:37:52.760: INFO: namespace: e2e-tests-kubectl-zq6wc, resource: bindings, ignored listing per whitelist
Jul 23 11:37:52.811: INFO: namespace e2e-tests-kubectl-zq6wc deletion completed in 6.10216157s
• [SLOW TEST:6.433 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:37:52.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jul 23 11:37:52.916: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:37:53.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-59lx9" for this suite.
Jul 23 11:37:59.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:37:59.113: INFO: namespace: e2e-tests-kubectl-59lx9, resource: bindings, ignored listing per whitelist
Jul 23 11:37:59.128: INFO: namespace e2e-tests-kubectl-59lx9 deletion completed in 6.098007113s
• [SLOW TEST:6.316 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:37:59.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul 23 11:37:59.261: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-a,UID:f588b4ae-ccd8-11ea-b2c9-0242ac120008,ResourceVersion:2356084,Generation:0,CreationTimestamp:2020-07-23 11:37:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 23 11:37:59.262: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-a,UID:f588b4ae-ccd8-11ea-b2c9-0242ac120008,ResourceVersion:2356084,Generation:0,CreationTimestamp:2020-07-23 11:37:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul 23 11:38:09.270: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-a,UID:f588b4ae-ccd8-11ea-b2c9-0242ac120008,ResourceVersion:2356104,Generation:0,CreationTimestamp:2020-07-23 11:37:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 23 11:38:09.270: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-a,UID:f588b4ae-ccd8-11ea-b2c9-0242ac120008,ResourceVersion:2356104,Generation:0,CreationTimestamp:2020-07-23 11:37:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul 23 11:38:19.279: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-a,UID:f588b4ae-ccd8-11ea-b2c9-0242ac120008,ResourceVersion:2356124,Generation:0,CreationTimestamp:2020-07-23 11:37:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 23 11:38:19.279: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-a,UID:f588b4ae-ccd8-11ea-b2c9-0242ac120008,ResourceVersion:2356124,Generation:0,CreationTimestamp:2020-07-23 11:37:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul 23 11:38:29.284: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-a,UID:f588b4ae-ccd8-11ea-b2c9-0242ac120008,ResourceVersion:2356144,Generation:0,CreationTimestamp:2020-07-23 11:37:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 23 11:38:29.285: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-a,UID:f588b4ae-ccd8-11ea-b2c9-0242ac120008,ResourceVersion:2356144,Generation:0,CreationTimestamp:2020-07-23 11:37:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul 23 11:38:39.292: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-b,UID:0d646868-ccd9-11ea-b2c9-0242ac120008,ResourceVersion:2356164,Generation:0,CreationTimestamp:2020-07-23 11:38:39 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 23 11:38:39.292: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-b,UID:0d646868-ccd9-11ea-b2c9-0242ac120008,ResourceVersion:2356164,Generation:0,CreationTimestamp:2020-07-23 11:38:39 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul 23 11:38:49.299: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-b,UID:0d646868-ccd9-11ea-b2c9-0242ac120008,ResourceVersion:2356184,Generation:0,CreationTimestamp:2020-07-23 11:38:39 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 23 11:38:49.299: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zrzhp,SelfLink:/api/v1/namespaces/e2e-tests-watch-zrzhp/configmaps/e2e-watch-test-configmap-b,UID:0d646868-ccd9-11ea-b2c9-0242ac120008,ResourceVersion:2356184,Generation:0,CreationTimestamp:2020-07-23 11:38:39 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:38:59.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-zrzhp" for this suite.
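The watch sequence above opens three watches (label A, label B, and A-or-B) and checks that only the matching watchers observe each ADDED/MODIFIED/DELETED event. A minimal stdlib sketch of that selector-dispatch rule (all class and variable names are hypothetical; this simulates only the selection semantics, not the Kubernetes watch API):

```python
# Hypothetical stand-in for the three label-selector watches the test opens.
# A watcher records an event iff the object's label value is in its accepted set.

class FakeWatch:
    def __init__(self, accepted_values):
        self.accepted = set(accepted_values)  # label values this watch selects
        self.events = []

    def notify(self, event_type, name, labels):
        if labels.get("watch-this-configmap") in self.accepted:
            self.events.append((event_type, name))

watch_a = FakeWatch({"multiple-watchers-A"})
watch_b = FakeWatch({"multiple-watchers-B"})
watch_ab = FakeWatch({"multiple-watchers-A", "multiple-watchers-B"})
watchers = [watch_a, watch_b, watch_ab]

# Replay the event sequence from the log for configmap A:
labels_a = {"watch-this-configmap": "multiple-watchers-A"}
for event in ("ADDED", "MODIFIED", "MODIFIED", "DELETED"):
    for w in watchers:
        w.notify(event, "e2e-watch-test-configmap-a", labels_a)

assert [e for e, _ in watch_a.events] == ["ADDED", "MODIFIED", "MODIFIED", "DELETED"]
assert watch_b.events == []            # the label-B watch observes nothing for A
assert watch_ab.events == watch_a.events  # the A-or-B watch mirrors watch A
```

The same dispatch explains why every event in the log is printed twice: both the single-label watch and the A-or-B watch match, while the other single-label watch stays silent.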
Jul 23 11:39:05.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:39:05.334: INFO: namespace: e2e-tests-watch-zrzhp, resource: bindings, ignored listing per whitelist
Jul 23 11:39:05.401: INFO: namespace e2e-tests-watch-zrzhp deletion completed in 6.09536626s
• [SLOW TEST:66.272 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:39:05.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-1d0a11f0-ccd9-11ea-92a5-0242ac11000b
STEP: Creating configMap with name cm-test-opt-upd-1d0a126a-ccd9-11ea-92a5-0242ac11000b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1d0a11f0-ccd9-11ea-92a5-0242ac11000b
STEP: Updating configmap cm-test-opt-upd-1d0a126a-ccd9-11ea-92a5-0242ac11000b
STEP: Creating configMap with name cm-test-opt-create-1d0a1394-ccd9-11ea-92a5-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:39:13.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7hd46" for this suite.
Jul 23 11:39:35.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:39:35.803: INFO: namespace: e2e-tests-configmap-7hd46, resource: bindings, ignored listing per whitelist
Jul 23 11:39:35.858: INFO: namespace e2e-tests-configmap-7hd46 deletion completed in 22.105157167s
• [SLOW TEST:30.457 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:39:35.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-2f2c9c59-ccd9-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 23 11:39:35.985: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2f2e9c0e-ccd9-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-hx5bw" to be "success or failure"
Jul 23 11:39:36.043: INFO: Pod "pod-projected-configmaps-2f2e9c0e-ccd9-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.662345ms
Jul 23 11:39:38.200: INFO: Pod "pod-projected-configmaps-2f2e9c0e-ccd9-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214480576s
Jul 23 11:39:40.203: INFO: Pod "pod-projected-configmaps-2f2e9c0e-ccd9-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.218043785s
Jul 23 11:39:42.207: INFO: Pod "pod-projected-configmaps-2f2e9c0e-ccd9-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.221510391s
STEP: Saw pod success
Jul 23 11:39:42.207: INFO: Pod "pod-projected-configmaps-2f2e9c0e-ccd9-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:39:42.210: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-2f2e9c0e-ccd9-11ea-92a5-0242ac11000b container projected-configmap-volume-test:
STEP: delete the pod
Jul 23 11:39:42.265: INFO: Waiting for pod pod-projected-configmaps-2f2e9c0e-ccd9-11ea-92a5-0242ac11000b to disappear
Jul 23 11:39:42.271: INFO: Pod pod-projected-configmaps-2f2e9c0e-ccd9-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:39:42.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hx5bw" for this suite.
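Each volume test above follows the same pattern: 'Waiting up to 5m0s for pod … to be "success or failure"', polling the pod phase (Pending, Running, then Succeeded or Failed) until a terminal state or the timeout. A minimal sketch of that polling loop (hypothetical helper, not the real e2e framework code):

```python
import time

def wait_for_condition(check, timeout_s=300.0, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns a truthy value or timeout_s elapses.
    Mirrors the 5m0s success-or-failure wait seen in the log (simplified)."""
    deadline = clock() + timeout_s
    while True:
        result = check()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("condition not met within %.0fs" % timeout_s)
        sleep(interval_s)

# Simulated pod that reports Pending twice, then Succeeded:
phases = iter(["Pending", "Pending", "Succeeded"])
seen = []

def pod_done():
    phase = next(phases)
    seen.append(phase)
    # Truthy only once the pod reaches a terminal phase:
    return phase in ("Succeeded", "Failed") and phase

result = wait_for_condition(pod_done, timeout_s=5, interval_s=0,
                            sleep=lambda s: None)  # no real sleeping in the demo
assert result == "Succeeded"
assert seen == ["Pending", "Pending", "Succeeded"]
```

Injecting `clock` and `sleep` keeps the loop testable without waiting out real intervals, which is why the demo passes a no-op sleep.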
Jul 23 11:39:48.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:39:48.358: INFO: namespace: e2e-tests-projected-hx5bw, resource: bindings, ignored listing per whitelist
Jul 23 11:39:48.383: INFO: namespace e2e-tests-projected-hx5bw deletion completed in 6.108622541s
• [SLOW TEST:12.525 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:39:48.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-5dn8n
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 23 11:39:48.474: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 23 11:40:10.704: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.235:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-5dn8n PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 23 11:40:10.704: INFO: >>> kubeConfig: /root/.kube/config
I0723 11:40:10.742039 6 log.go:172] (0xc00184c4d0) (0xc002100500) Create stream
I0723 11:40:10.742071 6 log.go:172] (0xc00184c4d0) (0xc002100500) Stream added, broadcasting: 1
I0723 11:40:10.745409 6 log.go:172] (0xc00184c4d0) Reply frame received for 1
I0723 11:40:10.745485 6 log.go:172] (0xc00184c4d0) (0xc0021005a0) Create stream
I0723 11:40:10.745513 6 log.go:172] (0xc00184c4d0) (0xc0021005a0) Stream added, broadcasting: 3
I0723 11:40:10.746620 6 log.go:172] (0xc00184c4d0) Reply frame received for 3
I0723 11:40:10.746660 6 log.go:172] (0xc00184c4d0) (0xc00266e780) Create stream
I0723 11:40:10.746675 6 log.go:172] (0xc00184c4d0) (0xc00266e780) Stream added, broadcasting: 5
I0723 11:40:10.747757 6 log.go:172] (0xc00184c4d0) Reply frame received for 5
I0723 11:40:10.855942 6 log.go:172] (0xc00184c4d0) Data frame received for 5
I0723 11:40:10.855989 6 log.go:172] (0xc00266e780) (5) Data frame handling
I0723 11:40:10.856017 6 log.go:172] (0xc00184c4d0) Data frame received for 3
I0723 11:40:10.856046 6 log.go:172] (0xc0021005a0) (3) Data frame handling
I0723 11:40:10.856122 6 log.go:172] (0xc0021005a0) (3) Data frame sent
I0723 11:40:10.856157 6 log.go:172] (0xc00184c4d0) Data frame received for 3
I0723 11:40:10.856188 6 log.go:172] (0xc0021005a0) (3) Data frame handling
I0723 11:40:10.858698 6 log.go:172] (0xc00184c4d0) Data frame received for 1
I0723 11:40:10.858731 6 log.go:172] (0xc002100500) (1) Data frame handling
I0723 11:40:10.858758 6 log.go:172] (0xc002100500) (1) Data frame sent
I0723 11:40:10.858784 6 log.go:172] (0xc00184c4d0) (0xc002100500) Stream removed, broadcasting: 1
I0723 11:40:10.858813 6 log.go:172] (0xc00184c4d0) Go away received
I0723 11:40:10.859131 6 log.go:172] (0xc00184c4d0) (0xc002100500) Stream removed, broadcasting: 1
I0723 11:40:10.859164 6 log.go:172] (0xc00184c4d0) (0xc0021005a0) Stream removed, broadcasting: 3
I0723 11:40:10.859189 6 log.go:172] (0xc00184c4d0) (0xc00266e780) Stream removed, broadcasting: 5
Jul 23 11:40:10.859: INFO: Found all expected endpoints: [netserver-0]
Jul 23 11:40:10.862: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.39:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-5dn8n PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 23 11:40:10.862: INFO: >>> kubeConfig: /root/.kube/config
I0723 11:40:10.900377 6 log.go:172] (0xc000e1b1e0) (0xc00266eaa0) Create stream
I0723 11:40:10.900404 6 log.go:172] (0xc000e1b1e0) (0xc00266eaa0) Stream added, broadcasting: 1
I0723 11:40:10.903500 6 log.go:172] (0xc000e1b1e0) Reply frame received for 1
I0723 11:40:10.903575 6 log.go:172] (0xc000e1b1e0) (0xc001e46000) Create stream
I0723 11:40:10.903598 6 log.go:172] (0xc000e1b1e0) (0xc001e46000) Stream added, broadcasting: 3
I0723 11:40:10.904856 6 log.go:172] (0xc000e1b1e0) Reply frame received for 3
I0723 11:40:10.904907 6 log.go:172] (0xc000e1b1e0) (0xc001cbe500) Create stream
I0723 11:40:10.904923 6 log.go:172] (0xc000e1b1e0) (0xc001cbe500) Stream added, broadcasting: 5
I0723 11:40:10.906012 6 log.go:172] (0xc000e1b1e0) Reply frame received for 5
I0723 11:40:10.973929 6 log.go:172] (0xc000e1b1e0) Data frame received for 5
I0723 11:40:10.973993 6 log.go:172] (0xc001cbe500) (5) Data frame handling
I0723 11:40:10.974033 6 log.go:172] (0xc000e1b1e0) Data frame received for 3
I0723 11:40:10.974053 6 log.go:172] (0xc001e46000) (3) Data frame handling
I0723 11:40:10.974071 6 log.go:172] (0xc001e46000) (3) Data frame sent
I0723 11:40:10.974080 6 log.go:172] (0xc000e1b1e0) Data frame received for 3
I0723 11:40:10.974088 6 log.go:172] (0xc001e46000) (3) Data frame handling
I0723 11:40:10.975850 6 log.go:172] (0xc000e1b1e0) Data frame received for 1
I0723 11:40:10.975878 6 log.go:172] (0xc00266eaa0) (1) Data frame handling
I0723 11:40:10.975904 6 log.go:172] (0xc00266eaa0) (1) Data frame sent
I0723 11:40:10.975923 6 log.go:172] (0xc000e1b1e0) (0xc00266eaa0) Stream removed, broadcasting: 1
I0723 11:40:10.975944 6 log.go:172] (0xc000e1b1e0) Go away received
I0723 11:40:10.976027 6 log.go:172] (0xc000e1b1e0) (0xc00266eaa0) Stream removed, broadcasting: 1
I0723 11:40:10.976060 6 log.go:172] (0xc000e1b1e0) (0xc001e46000) Stream removed, broadcasting: 3
I0723 11:40:10.976072 6 log.go:172] (0xc000e1b1e0) (0xc001cbe500) Stream removed, broadcasting: 5
Jul 23 11:40:10.976: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:40:10.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-5dn8n" for this suite.
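The granular network check above execs `curl http://<pod-ip>:8080/hostName` from a hostexec pod against each netserver pod and verifies the set of hostnames returned matches the expected endpoints. A local, stdlib-only stand-in for that /hostName round-trip (server names, ports, and handler here are illustrative, not the real netserver image):

```python
# Sketch of the /hostName check: spin up two fake "netserver" HTTP endpoints
# on ephemeral localhost ports, fetch /hostName from each, and compare the
# collected names against the expected set -- the same pass condition as
# "Found all expected endpoints: [netserver-0]" in the log.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HostNameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hostName":
            body = self.server.fake_hostname.encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

def start_netserver(name):
    srv = HTTPServer(("127.0.0.1", 0), HostNameHandler)  # port 0: ephemeral port
    srv.fake_hostname = name
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

expected = {"netserver-0", "netserver-1"}
servers = [start_netserver(n) for n in sorted(expected)]

found = set()
for srv in servers:
    url = "http://127.0.0.1:%d/hostName" % srv.server_address[1]
    found.add(urllib.request.urlopen(url, timeout=5).read().decode())

for srv in servers:
    srv.shutdown()

assert found == expected  # i.e. "Found all expected endpoints"
```

In the real test the client runs inside a host-network pod and the targets are pod IPs on port 8080, so the check also exercises the CNI data path rather than loopback.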
Jul 23 11:40:33.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:40:33.026: INFO: namespace: e2e-tests-pod-network-test-5dn8n, resource: bindings, ignored listing per whitelist
Jul 23 11:40:33.087: INFO: namespace e2e-tests-pod-network-test-5dn8n deletion completed in 22.10705579s
• [SLOW TEST:44.703 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:40:33.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 11:40:33.262: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51514fe1-ccd9-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-4hn68" to be "success or failure"
Jul 23 11:40:33.266: INFO: Pod "downwardapi-volume-51514fe1-ccd9-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475471ms
Jul 23 11:40:35.271: INFO: Pod "downwardapi-volume-51514fe1-ccd9-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008813533s
Jul 23 11:40:37.275: INFO: Pod "downwardapi-volume-51514fe1-ccd9-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013069174s
STEP: Saw pod success
Jul 23 11:40:37.275: INFO: Pod "downwardapi-volume-51514fe1-ccd9-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:40:37.278: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-51514fe1-ccd9-11ea-92a5-0242ac11000b container client-container:
STEP: delete the pod
Jul 23 11:40:37.304: INFO: Waiting for pod downwardapi-volume-51514fe1-ccd9-11ea-92a5-0242ac11000b to disappear
Jul 23 11:40:37.319: INFO: Pod downwardapi-volume-51514fe1-ccd9-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:40:37.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4hn68" for this suite.
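The downward API test above asserts that when a container declares no memory limit, the limit exposed through the projected volume falls back to the node's allocatable memory. The fallback rule as a tiny illustrative sketch (function and values are hypothetical, not kubelet code):

```python
def effective_memory_limit(container_limit_bytes, node_allocatable_bytes):
    """Downward-API behaviour the test verifies: report the container's
    memory limit if set, otherwise default to node allocatable."""
    if container_limit_bytes is not None:
        return container_limit_bytes
    return node_allocatable_bytes

NODE_ALLOCATABLE = 4 * 1024**3  # e.g. a node with 4Gi allocatable (made-up value)

# No limit set: the reported value defaults to node allocatable.
assert effective_memory_limit(None, NODE_ALLOCATABLE) == NODE_ALLOCATABLE
# Explicit limit (52428800 = 50Mi, as seen elsewhere in this log): reported as-is.
assert effective_memory_limit(52428800, NODE_ALLOCATABLE) == 52428800
```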
Jul 23 11:40:43.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:40:43.385: INFO: namespace: e2e-tests-projected-4hn68, resource: bindings, ignored listing per whitelist
Jul 23 11:40:43.426: INFO: namespace e2e-tests-projected-4hn68 deletion completed in 6.103830894s
• [SLOW TEST:10.339 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:40:43.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul 23 11:40:44.347: INFO: Pod name wrapped-volume-race-57eb70a6-ccd9-11ea-92a5-0242ac11000b: Found 0 pods out of 5
Jul 23 11:40:49.353: INFO: Pod name wrapped-volume-race-57eb70a6-ccd9-11ea-92a5-0242ac11000b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-57eb70a6-ccd9-11ea-92a5-0242ac11000b in namespace e2e-tests-emptydir-wrapper-rq5dv, will wait for the garbage collector to delete the pods
Jul 23 11:42:41.429: INFO: Deleting ReplicationController wrapped-volume-race-57eb70a6-ccd9-11ea-92a5-0242ac11000b took: 6.387018ms
Jul 23 11:42:41.630: INFO: Terminating ReplicationController wrapped-volume-race-57eb70a6-ccd9-11ea-92a5-0242ac11000b pods took: 200.236393ms
STEP: Creating RC which spawns configmap-volume pods
Jul 23 11:43:27.886: INFO: Pod name wrapped-volume-race-b960899e-ccd9-11ea-92a5-0242ac11000b: Found 0 pods out of 5
Jul 23 11:43:32.895: INFO: Pod name wrapped-volume-race-b960899e-ccd9-11ea-92a5-0242ac11000b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b960899e-ccd9-11ea-92a5-0242ac11000b in namespace e2e-tests-emptydir-wrapper-rq5dv, will wait for the garbage collector to delete the pods
Jul 23 11:46:06.978: INFO: Deleting ReplicationController wrapped-volume-race-b960899e-ccd9-11ea-92a5-0242ac11000b took: 7.182706ms
Jul 23 11:46:07.379: INFO: Terminating ReplicationController wrapped-volume-race-b960899e-ccd9-11ea-92a5-0242ac11000b pods took: 400.333671ms
STEP: Creating RC which spawns configmap-volume pods
Jul 23 11:46:57.637: INFO: Pod name wrapped-volume-race-3665b4a1-ccda-11ea-92a5-0242ac11000b: Found 0 pods out of 5
Jul 23 11:47:02.688: INFO: Pod name wrapped-volume-race-3665b4a1-ccda-11ea-92a5-0242ac11000b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3665b4a1-ccda-11ea-92a5-0242ac11000b in namespace e2e-tests-emptydir-wrapper-rq5dv, will wait for the garbage collector to delete the pods
Jul 23 11:49:36.773: INFO: Deleting ReplicationController wrapped-volume-race-3665b4a1-ccda-11ea-92a5-0242ac11000b took: 8.348011ms
Jul 23 11:49:36.874: INFO: Terminating ReplicationController wrapped-volume-race-3665b4a1-ccda-11ea-92a5-0242ac11000b pods took: 100.416751ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:50:28.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-rq5dv" for this suite.
Jul 23 11:50:36.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:50:36.578: INFO: namespace: e2e-tests-emptydir-wrapper-rq5dv, resource: bindings, ignored listing per whitelist
Jul 23 11:50:36.586: INFO: namespace e2e-tests-emptydir-wrapper-rq5dv deletion completed in 8.140757756s
• [SLOW TEST:593.159 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:50:36.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul 23 11:50:36.671: INFO: PodSpec: initContainers in spec.initContainers
Jul 23 11:52:00.099: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b8fc7366-ccda-11ea-92a5-0242ac11000b", GenerateName:"", Namespace:"e2e-tests-init-container-n7d68", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-n7d68/pods/pod-init-b8fc7366-ccda-11ea-92a5-0242ac11000b", UID:"b8fd7896-ccda-11ea-b2c9-0242ac120008", ResourceVersion:"2358295", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731101836, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"671068580"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-47r7g", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001c38140), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-47r7g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-47r7g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-47r7g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ff81d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false,
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a60300), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ff8270)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ff8290)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001ff8298), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001ff829c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731101836, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731101836, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731101836, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731101836, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.57", StartTime:(*v1.Time)(0xc0017520e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001752120), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000770070)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://df75814805509e3bad3c7c437dd73e65bff4be35242a79a286860c52e243c68c"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001752140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001752100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:52:00.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-n7d68" for this suite.
Jul 23 11:52:22.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:52:22.312: INFO: namespace: e2e-tests-init-container-n7d68, resource: bindings, ignored listing per whitelist
Jul 23 11:52:22.341: INFO: namespace e2e-tests-init-container-n7d68 deletion completed in 22.22959661s
• [SLOW TEST:105.755 seconds]
[k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:52:22.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 23 11:52:22.449: INFO: Waiting up to 5m0s for pod "pod-f8084697-ccda-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-x6k2g" to be "success or failure"
Jul 23 11:52:22.452: INFO: Pod "pod-f8084697-ccda-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.729113ms
Jul 23 11:52:24.460: INFO: Pod "pod-f8084697-ccda-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011201476s
Jul 23 11:52:26.485: INFO: Pod "pod-f8084697-ccda-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035270664s
STEP: Saw pod success
Jul 23 11:52:26.485: INFO: Pod "pod-f8084697-ccda-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:52:26.486: INFO: Trying to get logs from node hunter-worker2 pod pod-f8084697-ccda-11ea-92a5-0242ac11000b container test-container:
STEP: delete the pod
Jul 23 11:52:26.507: INFO: Waiting for pod pod-f8084697-ccda-11ea-92a5-0242ac11000b to disappear
Jul 23 11:52:26.548: INFO: Pod pod-f8084697-ccda-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:52:26.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-x6k2g" for this suite.
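The EmptyDir test above mounts an `emptyDir` volume on the node's default medium and verifies the permission bits of the mount point. A minimal sketch of that check outside Kubernetes, assuming (for illustration only) the world-writable `0777` directory mode that the default-medium emptyDir tests commonly expect:

```python
import os
import stat
import tempfile

def mode_string(path: str) -> str:
    """Render a path's permission bits the way the mounttest output formats them."""
    return stat.filemode(os.stat(path).st_mode)

# Simulate the volume directory; 0o777 mirrors the mode assumed above
# for a default-medium emptyDir mount (an assumption for illustration).
with tempfile.TemporaryDirectory() as vol:
    os.chmod(vol, 0o777)
    print(mode_string(vol))  # drwxrwxrwx
```

The real test does the equivalent from inside a pod: a helper container stats the mount path and the framework matches the printed mode against the expected string.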
Jul 23 11:52:32.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:52:32.577: INFO: namespace: e2e-tests-emptydir-x6k2g, resource: bindings, ignored listing per whitelist
Jul 23 11:52:32.626: INFO: namespace e2e-tests-emptydir-x6k2g deletion completed in 6.07559826s
• [SLOW TEST:10.285 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:52:32.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 11:52:36.974: INFO: Waiting up to 5m0s for pod "client-envvars-00aed06e-ccdb-11ea-92a5-0242ac11000b" in namespace "e2e-tests-pods-ctfhb" to be "success or failure"
Jul 23 11:52:36.985: INFO: Pod "client-envvars-00aed06e-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.577058ms
Jul 23 11:52:38.988: INFO: Pod "client-envvars-00aed06e-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014636261s
Jul 23 11:52:40.992: INFO: Pod "client-envvars-00aed06e-ccdb-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01812983s
STEP: Saw pod success
Jul 23 11:52:40.992: INFO: Pod "client-envvars-00aed06e-ccdb-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 11:52:40.995: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-00aed06e-ccdb-11ea-92a5-0242ac11000b container env3cont:
STEP: delete the pod
Jul 23 11:52:41.010: INFO: Waiting for pod client-envvars-00aed06e-ccdb-11ea-92a5-0242ac11000b to disappear
Jul 23 11:52:41.015: INFO: Pod client-envvars-00aed06e-ccdb-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:52:41.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ctfhb" for this suite.
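The Pods test above verifies that a container sees environment variables for services that existed when it started: the kubelet injects variables named after each service, with the name uppercased and dashes replaced by underscores. A simplified sketch of that naming convention (service name, IP, and port here are made-up values; the real kubelet also injects additional `_PORT_*` link-style variables):

```python
def service_env_vars(name: str, cluster_ip: str, port: int) -> dict:
    """Simplified model of kubelet-injected service environment variables:
    service name uppercased, dashes replaced with underscores."""
    prefix = name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
    }

# Hypothetical service, for illustration only.
env = service_env_vars("fooservice-1", "10.96.0.10", 8765)
print(env["FOOSERVICE_1_SERVICE_HOST"])  # 10.96.0.10
```

Because only services that exist at container start are reflected, the e2e test creates the service first, waits, and only then starts the client pod that dumps its environment.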
Jul 23 11:53:45.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:53:45.061: INFO: namespace: e2e-tests-pods-ctfhb, resource: bindings, ignored listing per whitelist
Jul 23 11:53:45.105: INFO: namespace e2e-tests-pods-ctfhb deletion completed in 1m4.086146156s
• [SLOW TEST:72.479 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:53:45.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0723 11:53:56.757807 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 23 11:53:56.757: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 11:53:56.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-6wltl" for this suite.
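The garbage collector test above deletes a ReplicationController without requesting orphaning and waits for its pods to be collected. Cascading deletion works through `ownerReferences`: dependents point at their owner, and deleting the owner triggers deletion of the dependents unless an orphan policy is requested. A toy model of that behavior (not the real controller, just the ownership rule):

```python
from dataclasses import dataclass, field

@dataclass
class Object:
    name: str
    owner_refs: list = field(default_factory=list)

def delete(store: dict, name: str, orphan: bool = False) -> None:
    """Toy model of Kubernetes garbage collection: deleting an owner also
    removes dependents that reference it, unless orphaning is requested."""
    store.pop(name, None)
    if orphan:
        # Orphan policy: strip the reference but keep the dependents.
        for obj in store.values():
            obj.owner_refs = [r for r in obj.owner_refs if r != name]
        return
    for dep in [n for n, o in store.items() if name in o.owner_refs]:
        delete(store, dep)  # cascade recursively

store = {
    "rc": Object("rc"),
    "pod-1": Object("pod-1", owner_refs=["rc"]),
    "pod-2": Object("pod-2", owner_refs=["rc"]),
}
delete(store, "rc")   # not orphaning: the pods are collected too
print(sorted(store))  # []
```

With `orphan=True` the pods would instead survive with their owner reference removed, which is exactly the case this test rules out.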
Jul 23 11:54:02.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 11:54:02.878: INFO: namespace: e2e-tests-gc-6wltl, resource: bindings, ignored listing per whitelist
Jul 23 11:54:02.908: INFO: namespace e2e-tests-gc-6wltl deletion completed in 6.147571298s
• [SLOW TEST:17.802 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Deployment
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 11:54:02.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 11:54:02.997: INFO: Creating deployment "test-recreate-deployment"
Jul 23 11:54:03.011: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jul 23 11:54:03.020: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jul 23 11:54:05.145: INFO: Waiting deployment "test-recreate-deployment" to complete
Jul 23 11:54:05.147: INFO: deployment status:
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 11:54:08.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 11:54:09.150: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102043, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 11:54:11.150: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jul 23 11:54:11.156: INFO: Updating deployment test-recreate-deployment Jul 23 11:54:11.156: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 23 11:54:11.451: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-l8f7n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l8f7n/deployments/test-recreate-deployment,UID:33f7987a-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2358709,Generation:2,CreationTimestamp:2020-07-23 11:54:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-07-23 11:54:11 +0000 UTC 2020-07-23 11:54:11 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-23 11:54:11 +0000 UTC 2020-07-23 11:54:03 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jul 23 11:54:11.649: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-l8f7n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l8f7n/replicasets/test-recreate-deployment-589c4bfd,UID:38e6cc8d-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2358706,Generation:1,CreationTimestamp:2020-07-23 11:54:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 33f7987a-ccdb-11ea-b2c9-0242ac120008 0xc0015c1b4f 0xc0015c1b60}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 23 11:54:11.649: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jul 23 11:54:11.650: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-l8f7n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l8f7n/replicasets/test-recreate-deployment-5bf7f65dc,UID:33fac1aa-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2358698,Generation:2,CreationTimestamp:2020-07-23 11:54:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 33f7987a-ccdb-11ea-b2c9-0242ac120008 0xc0015c1eb0 0xc0015c1eb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 23 11:54:11.653: INFO: Pod "test-recreate-deployment-589c4bfd-hgqs8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-hgqs8,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-l8f7n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-l8f7n/pods/test-recreate-deployment-589c4bfd-hgqs8,UID:38e88ff7-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2358710,Generation:0,CreationTimestamp:2020-07-23 11:54:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 38e6cc8d-ccdb-11ea-b2c9-0242ac120008 0xc0010a601f 0xc0010a62c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-t6mjw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t6mjw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-t6mjw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0010a6440} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010a64f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:54:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:54:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:54:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:54:11 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-23 11:54:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:54:11.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-l8f7n" for this suite. 
Jul 23 11:54:18.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:54:18.108: INFO: namespace: e2e-tests-deployment-l8f7n, resource: bindings, ignored listing per whitelist Jul 23 11:54:18.307: INFO: namespace e2e-tests-deployment-l8f7n deletion completed in 6.650416263s • [SLOW TEST:15.400 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:54:18.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jul 23 11:54:18.397: INFO: Waiting up to 5m0s for pod "pod-3d241af1-ccdb-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-978f9" to be "success or failure" Jul 23 11:54:18.445: INFO: Pod "pod-3d241af1-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 48.821097ms Jul 23 11:54:20.450: INFO: Pod "pod-3d241af1-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.053121862s Jul 23 11:54:22.454: INFO: Pod "pod-3d241af1-ccdb-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057239218s STEP: Saw pod success Jul 23 11:54:22.454: INFO: Pod "pod-3d241af1-ccdb-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:54:22.457: INFO: Trying to get logs from node hunter-worker pod pod-3d241af1-ccdb-11ea-92a5-0242ac11000b container test-container: STEP: delete the pod Jul 23 11:54:22.569: INFO: Waiting for pod pod-3d241af1-ccdb-11ea-92a5-0242ac11000b to disappear Jul 23 11:54:22.804: INFO: Pod pod-3d241af1-ccdb-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:54:22.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-978f9" for this suite. Jul 23 11:54:28.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:54:28.918: INFO: namespace: e2e-tests-emptydir-978f9, resource: bindings, ignored listing per whitelist Jul 23 11:54:28.920: INFO: namespace e2e-tests-emptydir-978f9 deletion completed in 6.111064537s • [SLOW TEST:10.612 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:54:28.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 23 11:54:29.986: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43ebf783-ccdb-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-7g627" to be "success or failure" Jul 23 11:54:30.059: INFO: Pod "downwardapi-volume-43ebf783-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 72.627384ms Jul 23 11:54:32.110: INFO: Pod "downwardapi-volume-43ebf783-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123457373s Jul 23 11:54:34.481: INFO: Pod "downwardapi-volume-43ebf783-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494814862s Jul 23 11:54:36.485: INFO: Pod "downwardapi-volume-43ebf783-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.498764464s Jul 23 11:54:38.488: INFO: Pod "downwardapi-volume-43ebf783-ccdb-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.501567378s STEP: Saw pod success Jul 23 11:54:38.488: INFO: Pod "downwardapi-volume-43ebf783-ccdb-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:54:38.490: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-43ebf783-ccdb-11ea-92a5-0242ac11000b container client-container: STEP: delete the pod Jul 23 11:54:38.524: INFO: Waiting for pod downwardapi-volume-43ebf783-ccdb-11ea-92a5-0242ac11000b to disappear Jul 23 11:54:38.643: INFO: Pod downwardapi-volume-43ebf783-ccdb-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:54:38.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7g627" for this suite. Jul 23 11:54:45.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:54:45.112: INFO: namespace: e2e-tests-projected-7g627, resource: bindings, ignored listing per whitelist Jul 23 11:54:45.147: INFO: namespace e2e-tests-projected-7g627 deletion completed in 6.500906771s • [SLOW TEST:16.227 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:54:45.148: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-4d287e7a-ccdb-11ea-92a5-0242ac11000b STEP: Creating a pod to test consume configMaps Jul 23 11:54:45.277: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d299cac-ccdb-11ea-92a5-0242ac11000b" in namespace "e2e-tests-configmap-7h4x9" to be "success or failure" Jul 23 11:54:45.282: INFO: Pod "pod-configmaps-4d299cac-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.837725ms Jul 23 11:54:47.301: INFO: Pod "pod-configmaps-4d299cac-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024183277s Jul 23 11:54:49.312: INFO: Pod "pod-configmaps-4d299cac-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034551962s Jul 23 11:54:51.348: INFO: Pod "pod-configmaps-4d299cac-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070856374s Jul 23 11:54:53.351: INFO: Pod "pod-configmaps-4d299cac-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073596047s Jul 23 11:54:55.353: INFO: Pod "pod-configmaps-4d299cac-ccdb-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.076298198s STEP: Saw pod success Jul 23 11:54:55.353: INFO: Pod "pod-configmaps-4d299cac-ccdb-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:54:55.355: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-4d299cac-ccdb-11ea-92a5-0242ac11000b container configmap-volume-test: STEP: delete the pod Jul 23 11:54:55.414: INFO: Waiting for pod pod-configmaps-4d299cac-ccdb-11ea-92a5-0242ac11000b to disappear Jul 23 11:54:55.419: INFO: Pod pod-configmaps-4d299cac-ccdb-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:54:55.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7h4x9" for this suite. Jul 23 11:55:01.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:55:01.455: INFO: namespace: e2e-tests-configmap-7h4x9, resource: bindings, ignored listing per whitelist Jul 23 11:55:01.502: INFO: namespace e2e-tests-configmap-7h4x9 deletion completed in 6.07976147s • [SLOW TEST:16.354 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:55:01.502: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 23 11:55:01.607: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jul 23 11:55:01.619: INFO: Pod name sample-pod: Found 0 pods out of 1 Jul 23 11:55:06.623: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 23 11:55:10.630: INFO: Creating deployment "test-rolling-update-deployment" Jul 23 11:55:10.632: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jul 23 11:55:10.652: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jul 23 11:55:13.872: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jul 23 11:55:13.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 11:55:15.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 11:55:17.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102110, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 11:55:19.878: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 23 11:55:19.886: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-2rcjv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2rcjv/deployments/test-rolling-update-deployment,UID:5c476a3d-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2358983,Generation:1,CreationTimestamp:2020-07-23 11:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-23 11:55:10 +0000 UTC 2020-07-23 11:55:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-23 11:55:18 +0000 UTC 2020-07-23 11:55:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 23 11:55:19.890: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-2rcjv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2rcjv/replicasets/test-rolling-update-deployment-75db98fb4c,UID:5c4b454f-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2358974,Generation:1,CreationTimestamp:2020-07-23 11:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 5c476a3d-ccdb-11ea-b2c9-0242ac120008 0xc002687707 0xc002687708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 23 11:55:19.890: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jul 23 11:55:19.890: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-2rcjv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2rcjv/replicasets/test-rolling-update-controller,UID:56e6a8b1-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2358982,Generation:2,CreationTimestamp:2020-07-23 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 5c476a3d-ccdb-11ea-b2c9-0242ac120008 0xc00268762f 0xc002687640}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 23 11:55:19.925: INFO: Pod "test-rolling-update-deployment-75db98fb4c-xp764" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-xp764,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-2rcjv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2rcjv/pods/test-rolling-update-deployment-75db98fb4c-xp764,UID:5c4d1c6f-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2358973,Generation:0,CreationTimestamp:2020-07-23 11:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 5c4b454f-ccdb-11ea-b2c9-0242ac120008 0xc002687fd7 0xc002687fd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g5scd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g5scd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-g5scd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cb2050} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cb2070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:55:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:55:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:55:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 11:55:10 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.241,StartTime:2020-07-23 11:55:10 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-23 11:55:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://e440d373a8206ee634373870b193ae908b1cff18c6f46a0fdccfcfd818183ddb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:55:19.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-2rcjv" 
for this suite. Jul 23 11:55:28.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:55:28.797: INFO: namespace: e2e-tests-deployment-2rcjv, resource: bindings, ignored listing per whitelist Jul 23 11:55:28.813: INFO: namespace e2e-tests-deployment-2rcjv deletion completed in 8.37205573s • [SLOW TEST:27.311 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:55:28.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-673052dc-ccdb-11ea-92a5-0242ac11000b STEP: Creating secret with name s-test-opt-upd-67305348-ccdb-11ea-92a5-0242ac11000b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-673052dc-ccdb-11ea-92a5-0242ac11000b STEP: Updating secret s-test-opt-upd-67305348-ccdb-11ea-92a5-0242ac11000b STEP: Creating secret with name s-test-opt-create-67305380-ccdb-11ea-92a5-0242ac11000b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:57:00.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dm9rw" for this suite. Jul 23 11:57:24.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:57:24.784: INFO: namespace: e2e-tests-secrets-dm9rw, resource: bindings, ignored listing per whitelist Jul 23 11:57:24.809: INFO: namespace e2e-tests-secrets-dm9rw deletion completed in 24.085601938s • [SLOW TEST:115.996 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:57:24.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jul 23 11:57:24.902: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 23 11:57:24.908: INFO: Waiting for terminating namespaces to be deleted... 
Jul 23 11:57:24.910: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jul 23 11:57:24.914: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Jul 23 11:57:24.914: INFO: Container kube-proxy ready: true, restart count 0 Jul 23 11:57:24.914: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Jul 23 11:57:24.914: INFO: Container kindnet-cni ready: true, restart count 0 Jul 23 11:57:24.914: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jul 23 11:57:24.919: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Jul 23 11:57:24.919: INFO: Container kindnet-cni ready: true, restart count 0 Jul 23 11:57:24.919: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Jul 23 11:57:24.919: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Jul 23 11:57:24.972: INFO: Pod kindnet-2w5m4 requesting resource cpu=100m on Node hunter-worker Jul 23 11:57:24.972: INFO: Pod kindnet-hpnvh requesting resource cpu=100m on Node hunter-worker2 Jul 23 11:57:24.972: INFO: Pod kube-proxy-8wnps requesting resource cpu=0m on Node hunter-worker Jul 23 11:57:24.972: INFO: Pod kube-proxy-b6f6s requesting resource cpu=0m on Node hunter-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-ac5a6134-ccdb-11ea-92a5-0242ac11000b.16245fbfb84eced5], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-2vq85/filler-pod-ac5a6134-ccdb-11ea-92a5-0242ac11000b to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-ac5a6134-ccdb-11ea-92a5-0242ac11000b.16245fc02c4e2663], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ac5a6134-ccdb-11ea-92a5-0242ac11000b.16245fc1ad5ed047], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-ac5a6134-ccdb-11ea-92a5-0242ac11000b.16245fc202f90391], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-ac5afb08-ccdb-11ea-92a5-0242ac11000b.16245fbfb8a5eb42], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-2vq85/filler-pod-ac5afb08-ccdb-11ea-92a5-0242ac11000b to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-ac5afb08-ccdb-11ea-92a5-0242ac11000b.16245fc002c4222d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ac5afb08-ccdb-11ea-92a5-0242ac11000b.16245fc1a8a1ee71], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-ac5afb08-ccdb-11ea-92a5-0242ac11000b.16245fc202f51a3c], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.16245fc284ce6be0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] 
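The FailedScheduling warning above is the point of the test: once the filler pods have claimed most of each node's allocatable CPU, a pod whose request cannot fit on any node is rejected by the scheduler. A sketch of such a pod (the CPU figure is illustrative, not the test's computed value):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod            # mirrors the test's unschedulable pod
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"                  # more CPU than any node has left once the fillers run
```

Note the event text `0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.`: the control-plane node is excluded by its taint, and both workers fail the CPU predicate.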
STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:57:39.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-2vq85" for this suite. Jul 23 11:57:46.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:57:47.197: INFO: namespace: e2e-tests-sched-pred-2vq85, resource: bindings, ignored listing per whitelist Jul 23 11:57:47.235: INFO: namespace e2e-tests-sched-pred-2vq85 deletion completed in 7.590382891s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:22.426 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:57:47.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account 
to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 23 11:57:47.417: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-6nbdv,SelfLink:/api/v1/namespaces/e2e-tests-watch-6nbdv/configmaps/e2e-watch-test-watch-closed,UID:b9b25028-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2359388,Generation:0,CreationTimestamp:2020-07-23 11:57:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 23 11:57:47.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-6nbdv,SelfLink:/api/v1/namespaces/e2e-tests-watch-6nbdv/configmaps/e2e-watch-test-watch-closed,UID:b9b25028-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2359389,Generation:0,CreationTimestamp:2020-07-23 11:57:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: 
Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 23 11:57:47.440: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-6nbdv,SelfLink:/api/v1/namespaces/e2e-tests-watch-6nbdv/configmaps/e2e-watch-test-watch-closed,UID:b9b25028-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2359390,Generation:0,CreationTimestamp:2020-07-23 11:57:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 23 11:57:47.440: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-6nbdv,SelfLink:/api/v1/namespaces/e2e-tests-watch-6nbdv/configmaps/e2e-watch-test-watch-closed,UID:b9b25028-ccdb-11ea-b2c9-0242ac120008,ResourceVersion:2359391,Generation:0,CreationTimestamp:2020-07-23 11:57:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:57:47.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-6nbdv" for this suite. 
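The watch test resumes from the last `ResourceVersion` it observed (2359389 in the dump above), which is why only the later MODIFIED (2359390) and DELETED (2359391) events are delivered. Against the raw API this looks roughly like the request below, assuming `kubectl proxy` is serving on its default port 8001 (a sketch; it needs a live cluster and a still-valid resource version):

```shell
# Resume watching configmaps in the test namespace from a known resourceVersion.
curl "http://localhost:8001/api/v1/namespaces/e2e-tests-watch-6nbdv/configmaps?watch=true&resourceVersion=2359389"
```

Events at or before the supplied `resourceVersion` are not replayed, so a client that stores the version from each notification can restart a dropped watch without missing or duplicating changes.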
Jul 23 11:57:53.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:57:53.592: INFO: namespace: e2e-tests-watch-6nbdv, resource: bindings, ignored listing per whitelist Jul 23 11:57:53.611: INFO: namespace e2e-tests-watch-6nbdv deletion completed in 6.077398896s • [SLOW TEST:6.375 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:57:53.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Jul 23 11:57:53.752: INFO: Waiting up to 5m0s for pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b" in namespace "e2e-tests-containers-94sds" to be "success or failure" Jul 23 11:57:53.756: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.409045ms Jul 23 11:57:56.329: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576820464s Jul 23 11:57:58.332: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.580011715s Jul 23 11:58:00.516: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.764065489s Jul 23 11:58:02.519: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.767617267s Jul 23 11:58:06.409: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.656924679s Jul 23 11:58:08.412: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.659920991s Jul 23 11:58:10.432: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.679775977s Jul 23 11:58:12.435: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.683460635s Jul 23 11:58:14.623: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 20.8713227s Jul 23 11:58:17.298: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 23.546681261s Jul 23 11:58:19.302: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 25.550281737s Jul 23 11:58:21.305: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. 
Elapsed: 27.553653492s Jul 23 11:58:23.547: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 29.794858768s Jul 23 11:58:25.550: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 31.797974333s Jul 23 11:58:27.552: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.800658923s STEP: Saw pod success Jul 23 11:58:27.553: INFO: Pod "client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:58:27.555: INFO: Trying to get logs from node hunter-worker2 pod client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b container test-container: STEP: delete the pod Jul 23 11:58:27.572: INFO: Waiting for pod client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b to disappear Jul 23 11:58:27.578: INFO: Pod client-containers-bd80e546-ccdb-11ea-92a5-0242ac11000b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:58:27.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-94sds" for this suite. 
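The "override the image's default arguments (docker cmd)" behavior exercised above maps to the pod spec's `args` field, which replaces the image's Dockerfile `CMD` while leaving its `ENTRYPOINT` intact (`command` would replace `ENTRYPOINT`). A hedged sketch, using an assumed busybox image rather than the test's own:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image
    args: ["echo", "override"]    # replaces the image's default CMD
```

The test then reads the container log (the "Trying to get logs" step) and asserts it contains the overridden arguments rather than the image's default output.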
Jul 23 11:58:33.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:58:33.702: INFO: namespace: e2e-tests-containers-94sds, resource: bindings, ignored listing per whitelist Jul 23 11:58:33.702: INFO: namespace e2e-tests-containers-94sds deletion completed in 6.122123159s • [SLOW TEST:40.091 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:58:33.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-d5634f19-ccdb-11ea-92a5-0242ac11000b STEP: Creating a pod to test consume secrets Jul 23 11:58:33.843: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d566159c-ccdb-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-jbs8d" to be "success or failure" Jul 23 11:58:33.854: INFO: Pod "pod-projected-secrets-d566159c-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.577459ms Jul 23 11:58:35.857: INFO: Pod "pod-projected-secrets-d566159c-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013640392s Jul 23 11:58:37.917: INFO: Pod "pod-projected-secrets-d566159c-ccdb-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074065683s STEP: Saw pod success Jul 23 11:58:37.917: INFO: Pod "pod-projected-secrets-d566159c-ccdb-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:58:37.920: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-d566159c-ccdb-11ea-92a5-0242ac11000b container projected-secret-volume-test: STEP: delete the pod Jul 23 11:58:38.249: INFO: Waiting for pod pod-projected-secrets-d566159c-ccdb-11ea-92a5-0242ac11000b to disappear Jul 23 11:58:38.456: INFO: Pod pod-projected-secrets-d566159c-ccdb-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:58:38.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jbs8d" for this suite. 
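The projected-secret volume above is the `projected` volume type with a `secret` source; unlike a plain `secret` volume, `projected` can merge secrets, configmaps, and downward API items into one directory. A minimal sketch (names are placeholders for the test's generated ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo     # illustrative name
spec:
  containers:
  - name: projected-secret-volume-test
    image: k8s.gcr.io/pause:3.1
    volumeMounts:
    - name: projected
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: projected
    projected:
      sources:
      - secret:
          name: projected-secret-test   # placeholder secret name
```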
Jul 23 11:58:48.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:58:48.570: INFO: namespace: e2e-tests-projected-jbs8d, resource: bindings, ignored listing per whitelist Jul 23 11:58:48.849: INFO: namespace e2e-tests-projected-jbs8d deletion completed in 10.390319861s • [SLOW TEST:15.146 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:58:48.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 23 11:58:50.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-defbf636-ccdb-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-s88mz" to be "success or failure" Jul 23 11:58:50.170: INFO: Pod "downwardapi-volume-defbf636-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.251494ms Jul 23 11:58:52.174: INFO: Pod "downwardapi-volume-defbf636-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060757683s Jul 23 11:58:54.541: INFO: Pod "downwardapi-volume-defbf636-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.427908283s Jul 23 11:58:56.545: INFO: Pod "downwardapi-volume-defbf636-ccdb-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.431453979s STEP: Saw pod success Jul 23 11:58:56.545: INFO: Pod "downwardapi-volume-defbf636-ccdb-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:58:56.547: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-defbf636-ccdb-11ea-92a5-0242ac11000b container client-container: STEP: delete the pod Jul 23 11:58:56.934: INFO: Waiting for pod downwardapi-volume-defbf636-ccdb-11ea-92a5-0242ac11000b to disappear Jul 23 11:58:57.209: INFO: Pod downwardapi-volume-defbf636-ccdb-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:58:57.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-s88mz" for this suite. 
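"DefaultMode" in the downward API test is the `defaultMode` field on the volume source, applied to every projected file unless an item overrides it. A sketch, assuming a mode of 0400 for illustration (the test pins its own value and asserts on the resulting file permissions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo     # illustrative name
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/pause:3.1
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400           # assumed mode for this sketch
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```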
Jul 23 11:59:03.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:59:03.380: INFO: namespace: e2e-tests-downward-api-s88mz, resource: bindings, ignored listing per whitelist Jul 23 11:59:03.409: INFO: namespace e2e-tests-downward-api-s88mz deletion completed in 6.195505514s • [SLOW TEST:14.560 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:59:03.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 23 11:59:03.512: INFO: Waiting up to 5m0s for pod "pod-e712089d-ccdb-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-x5xjd" to be "success or failure" Jul 23 11:59:03.519: INFO: Pod "pod-e712089d-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.084942ms Jul 23 11:59:06.745: INFO: Pod "pod-e712089d-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.233090159s Jul 23 11:59:08.864: INFO: Pod "pod-e712089d-ccdb-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.352341442s Jul 23 11:59:10.867: INFO: Pod "pod-e712089d-ccdb-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 7.355567968s Jul 23 11:59:12.871: INFO: Pod "pod-e712089d-ccdb-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.358992008s STEP: Saw pod success Jul 23 11:59:12.871: INFO: Pod "pod-e712089d-ccdb-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 11:59:12.873: INFO: Trying to get logs from node hunter-worker pod pod-e712089d-ccdb-11ea-92a5-0242ac11000b container test-container: STEP: delete the pod Jul 23 11:59:13.045: INFO: Waiting for pod pod-e712089d-ccdb-11ea-92a5-0242ac11000b to disappear Jul 23 11:59:13.074: INFO: Pod pod-e712089d-ccdb-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:59:13.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-x5xjd" for this suite. 
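The `(non-root,0644,default)` triple in the test name maps to a non-root `securityContext`, a file the test container creates with mode 0644 inside the volume, and an `emptyDir` with the default (node-disk) medium rather than `Memory`. A sketch with an illustrative UID:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo             # illustrative name
spec:
  securityContext:
    runAsUser: 1001               # the "non-root" part of the test name
  containers:
  - name: test-container
    image: k8s.gcr.io/pause:3.1
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}                  # default medium: backed by node disk, not tmpfs
```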
Jul 23 11:59:21.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 11:59:21.237: INFO: namespace: e2e-tests-emptydir-x5xjd, resource: bindings, ignored listing per whitelist Jul 23 11:59:21.261: INFO: namespace e2e-tests-emptydir-x5xjd deletion completed in 8.183707347s • [SLOW TEST:17.852 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 11:59:21.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Jul 23 11:59:36.496: INFO: 5 pods remaining Jul 23 11:59:36.496: INFO: 5 pods has nil DeletionTimestamp Jul 23 11:59:36.496: INFO: STEP: Gathering metrics W0723 11:59:40.960919 6 
metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 23 11:59:40.960: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 11:59:40.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-m2w4r" for this suite. 
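The garbage-collector test hangs each half-adopted pod on two `ownerReferences`: deleting `simpletest-rc-to-be-deleted` with foreground propagation must not remove dependents that still list `simpletest-rc-to-stay` as a valid owner. Roughly, such a dependent carries metadata like the following (UIDs are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod            # illustrative name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 00000000-0000-0000-0000-000000000000   # placeholder UID
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 11111111-1111-1111-1111-111111111111   # placeholder UID
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
```

The collector only deletes an object once every owner reference is gone or points at a deleted owner, which is why the "5 pods remaining" lines above drain slowly rather than all at once.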
Jul 23 12:00:11.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:00:11.041: INFO: namespace: e2e-tests-gc-m2w4r, resource: bindings, ignored listing per whitelist Jul 23 12:00:11.103: INFO: namespace e2e-tests-gc-m2w4r deletion completed in 30.098742692s • [SLOW TEST:49.842 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:00:11.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 23 12:00:12.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine 
--generator=run/v1 --namespace=e2e-tests-kubectl-pr64w' Jul 23 12:00:24.366: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 23 12:00:24.366: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jul 23 12:00:24.466: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jul 23 12:00:24.486: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jul 23 12:00:24.518: INFO: scanned /root for discovery docs: Jul 23 12:00:24.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-pr64w' Jul 23 12:00:40.857: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jul 23 12:00:40.857: INFO: stdout: "Created e2e-test-nginx-rc-c45cbf322cb937970b1db698edb1052e\nScaling up e2e-test-nginx-rc-c45cbf322cb937970b1db698edb1052e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c45cbf322cb937970b1db698edb1052e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c45cbf322cb937970b1db698edb1052e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jul 23 12:00:40.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pr64w' Jul 23 12:00:40.956: INFO: stderr: "" Jul 23 12:00:40.956: INFO: stdout: "e2e-test-nginx-rc-c45cbf322cb937970b1db698edb1052e-ncvnv " Jul 23 12:00:40.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c45cbf322cb937970b1db698edb1052e-ncvnv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pr64w' Jul 23 12:00:41.048: INFO: stderr: "" Jul 23 12:00:41.048: INFO: stdout: "true" Jul 23 12:00:41.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c45cbf322cb937970b1db698edb1052e-ncvnv -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pr64w' Jul 23 12:00:41.141: INFO: stderr: "" Jul 23 12:00:41.141: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jul 23 12:00:41.141: INFO: e2e-test-nginx-rc-c45cbf322cb937970b1db698edb1052e-ncvnv is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Jul 23 12:00:41.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pr64w' Jul 23 12:00:41.540: INFO: stderr: "" Jul 23 12:00:41.540: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:00:41.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pr64w" for this suite. 
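The `kubectl run --generator=run/v1` invocation at the start of this test creates a ReplicationController. A minimal manifest equivalent to what the test exercises might look like the following — a sketch reconstructed from the logged flags; everything beyond the name, image, label, and replica count is an assumption:

```yaml
# Sketch of an RC equivalent to:
#   kubectl run e2e-test-nginx-rc --generator=run/v1 \
#     --image=docker.io/library/nginx:1.14-alpine
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc        # label that `kubectl run` applies, per the log's -l selector
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
        imagePullPolicy: IfNotPresent
```

The `rolling-update` command then creates a second RC with a hashed name, scales it up while scaling this one down, and finally renames it back — exactly the sequence visible in the stdout above.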
Jul 23 12:00:48.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:00:48.081: INFO: namespace: e2e-tests-kubectl-pr64w, resource: bindings, ignored listing per whitelist
Jul 23 12:00:48.090: INFO: namespace e2e-tests-kubectl-pr64w deletion completed in 6.471579648s
• [SLOW TEST:36.988 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:00:48.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-v26z6
Jul 23 12:00:54.299: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-v26z6
STEP: checking the pod's current state and verifying that restartCount is present
Jul 23 12:00:54.301: INFO: Initial restart count of pod liveness-http is 0
Jul 23 12:01:13.790: INFO: Restart count of pod e2e-tests-container-probe-v26z6/liveness-http is now 1 (19.489445091s elapsed)
Jul 23 12:01:38.484: INFO: Restart count of pod e2e-tests-container-probe-v26z6/liveness-http is now 2 (44.183208407s elapsed)
Jul 23 12:02:10.532: INFO: Restart count of pod e2e-tests-container-probe-v26z6/liveness-http is now 3 (1m16.231042426s elapsed)
Jul 23 12:02:36.868: INFO: Restart count of pod e2e-tests-container-probe-v26z6/liveness-http is now 4 (1m42.567510214s elapsed)
Jul 23 12:03:37.210: INFO: Restart count of pod e2e-tests-container-probe-v26z6/liveness-http is now 5 (2m42.909389986s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:03:37.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-v26z6" for this suite.
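The monotonically increasing restart count above is produced by the kubelet repeatedly failing an HTTP liveness probe, killing the container, and restarting it. A hypothetical pod spec that would produce this pattern is sketched below — the image, probe path, port, and timing values are all assumptions, not the test's actual manifest:

```yaml
# Hypothetical liveness-probe pod: each failed probe triggers a container
# restart, so status.containerStatuses[0].restartCount increases over time.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness        # assumed image; any server whose health endpoint fails works
    livenessProbe:
      httpGet:
        path: /healthz                # assumed probe path
        port: 8080                    # assumed port
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
  restartPolicy: Always               # required for restartCount to keep increasing
```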
Jul 23 12:03:43.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:03:43.681: INFO: namespace: e2e-tests-container-probe-v26z6, resource: bindings, ignored listing per whitelist
Jul 23 12:03:43.731: INFO: namespace e2e-tests-container-probe-v26z6 deletion completed in 6.352344244s
• [SLOW TEST:175.640 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:03:43.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jul 23 12:03:44.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pp89n'
Jul 23 12:03:44.477: INFO: stderr: ""
Jul 23 12:03:44.477: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Jul 23 12:03:45.481: INFO: Selector matched 1 pods for map[app:redis] Jul 23 12:03:45.481: INFO: Found 0 / 1 Jul 23 12:03:46.715: INFO: Selector matched 1 pods for map[app:redis] Jul 23 12:03:46.715: INFO: Found 0 / 1 Jul 23 12:03:47.674: INFO: Selector matched 1 pods for map[app:redis] Jul 23 12:03:47.674: INFO: Found 0 / 1 Jul 23 12:03:48.480: INFO: Selector matched 1 pods for map[app:redis] Jul 23 12:03:48.480: INFO: Found 0 / 1 Jul 23 12:03:49.481: INFO: Selector matched 1 pods for map[app:redis] Jul 23 12:03:49.481: INFO: Found 1 / 1 Jul 23 12:03:49.481: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 23 12:03:49.483: INFO: Selector matched 1 pods for map[app:redis] Jul 23 12:03:49.483: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jul 23 12:03:49.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-47bsg redis-master --namespace=e2e-tests-kubectl-pp89n' Jul 23 12:03:49.584: INFO: stderr: "" Jul 23 12:03:49.584: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Jul 12:03:48.620 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jul 12:03:48.620 # Server started, Redis version 3.2.12\n1:M 23 Jul 12:03:48.620 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Jul 12:03:48.620 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jul 23 12:03:49.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-47bsg redis-master --namespace=e2e-tests-kubectl-pp89n --tail=1' Jul 23 12:03:49.669: INFO: stderr: "" Jul 23 12:03:49.669: INFO: stdout: "1:M 23 Jul 12:03:48.620 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jul 23 12:03:49.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-47bsg redis-master --namespace=e2e-tests-kubectl-pp89n --limit-bytes=1' Jul 23 12:03:49.764: INFO: stderr: "" Jul 23 12:03:49.764: INFO: stdout: " " STEP: exposing timestamps Jul 23 12:03:49.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-47bsg redis-master --namespace=e2e-tests-kubectl-pp89n --tail=1 --timestamps' Jul 23 12:03:49.853: INFO: 
stderr: "" Jul 23 12:03:49.853: INFO: stdout: "2020-07-23T12:03:48.62028151Z 1:M 23 Jul 12:03:48.620 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jul 23 12:03:52.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-47bsg redis-master --namespace=e2e-tests-kubectl-pp89n --since=1s' Jul 23 12:03:52.459: INFO: stderr: "" Jul 23 12:03:52.459: INFO: stdout: "" Jul 23 12:03:52.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-47bsg redis-master --namespace=e2e-tests-kubectl-pp89n --since=24h' Jul 23 12:03:52.557: INFO: stderr: "" Jul 23 12:03:52.557: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Jul 12:03:48.620 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jul 12:03:48.620 # Server started, Redis version 3.2.12\n1:M 23 Jul 12:03:48.620 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 23 Jul 12:03:48.620 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jul 23 12:03:52.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pp89n'
Jul 23 12:03:52.658: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 23 12:03:52.658: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jul 23 12:03:52.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-pp89n'
Jul 23 12:03:52.754: INFO: stderr: "No resources found.\n"
Jul 23 12:03:52.754: INFO: stdout: ""
Jul 23 12:03:52.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-pp89n -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 23 12:03:52.903: INFO: stderr: ""
Jul 23 12:03:52.903: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:03:52.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pp89n" for this suite.
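The log-filtering flags exercised above (`--tail`, `--limit-bytes`) have simple stream semantics that can be mimicked locally with coreutils. A sketch, using made-up sample log content rather than the Redis output from the test:

```shell
# Stand-in for a pod's stdout (contents invented for illustration).
printf 'Server started\nReady to accept connections\n' > /tmp/sample-pod.log

# `kubectl logs --tail=1` keeps only the last line:
tail -n 1 /tmp/sample-pod.log          # -> Ready to accept connections

# `kubectl logs --limit-bytes=1` truncates the stream to the first byte,
# which is why the test above saw a single character of the Redis banner:
head -c 1 /tmp/sample-pod.log          # -> S
```

`--timestamps` and `--since` have no such local analogue: they rely on the per-line RFC3339 timestamps the container runtime records alongside the log stream.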
Jul 23 12:03:59.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:03:59.125: INFO: namespace: e2e-tests-kubectl-pp89n, resource: bindings, ignored listing per whitelist
Jul 23 12:03:59.146: INFO: namespace e2e-tests-kubectl-pp89n deletion completed in 6.239500403s
• [SLOW TEST:15.415 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:03:59.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:04:03.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-p7bqk" for this suite.
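The read-only-root test above relies on the container-level security context. A hypothetical pod spec for this scenario is sketched below — the command and names are assumptions, not the test's actual manifest; the key field is `readOnlyRootFilesystem`, which makes any write to the container's root filesystem fail:

```yaml
# Hypothetical busybox pod matching the scenario tested above: with a
# read-only root filesystem, the write to / fails and the pod exits non-zero.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo test > /file"]   # expected to fail with a read-only error
    securityContext:
      readOnlyRootFilesystem: true
  restartPolicy: Never
```

Writable scratch space, if a workload needs it, is typically provided by mounting an `emptyDir` volume at a specific path rather than relaxing this setting.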
Jul 23 12:04:43.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:04:43.347: INFO: namespace: e2e-tests-kubelet-test-p7bqk, resource: bindings, ignored listing per whitelist
Jul 23 12:04:43.386: INFO: namespace e2e-tests-kubelet-test-p7bqk deletion completed in 40.067867567s
• [SLOW TEST:44.239 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:04:43.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-b1bd1e86-ccdc-11ea-92a5-0242ac11000b
STEP: Creating configMap with name cm-test-opt-upd-b1bd1ec1-ccdc-11ea-92a5-0242ac11000b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b1bd1e86-ccdc-11ea-92a5-0242ac11000b
STEP: Updating configmap cm-test-opt-upd-b1bd1ec1-ccdc-11ea-92a5-0242ac11000b
STEP: Creating configMap with name cm-test-opt-create-b1bd1eda-ccdc-11ea-92a5-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:04:51.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gngrq" for this suite.
Jul 23 12:05:15.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:05:15.761: INFO: namespace: e2e-tests-projected-gngrq, resource: bindings, ignored listing per whitelist
Jul 23 12:05:15.775: INFO: namespace e2e-tests-projected-gngrq deletion completed in 24.159450572s
• [SLOW TEST:32.390 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:05:15.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 12:05:16.466:
INFO: Pod name rollover-pod: Found 0 pods out of 1 Jul 23 12:05:21.482: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 23 12:05:21.482: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jul 23 12:05:23.486: INFO: Creating deployment "test-rollover-deployment" Jul 23 12:05:23.538: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jul 23 12:05:25.560: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jul 23 12:05:25.565: INFO: Ensure that both replica sets have 1 created replica Jul 23 12:05:25.569: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jul 23 12:05:25.574: INFO: Updating deployment test-rollover-deployment Jul 23 12:05:25.574: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jul 23 12:05:27.601: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jul 23 12:05:27.606: INFO: Make sure deployment "test-rollover-deployment" is complete Jul 23 12:05:27.615: INFO: all replica sets need to contain the pod-template-hash label Jul 23 12:05:27.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102725, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 12:05:29.623: INFO: all replica sets need to contain the pod-template-hash label Jul 23 12:05:29.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102725, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 12:05:31.624: INFO: all replica sets need to contain the pod-template-hash label Jul 23 12:05:31.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102730, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 12:05:33.624: INFO: all replica sets need to contain the pod-template-hash label Jul 23 12:05:33.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102730, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 12:05:35.625: INFO: all replica sets need to contain the pod-template-hash label Jul 23 12:05:35.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102730, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 12:05:37.624: INFO: all replica sets need to contain the pod-template-hash label Jul 23 12:05:37.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102730, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 12:05:39.623: INFO: all replica sets need to contain the pod-template-hash label Jul 23 12:05:39.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102730, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731102723, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 23 12:05:41.624: INFO: Jul 23 12:05:41.624: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 23 12:05:41.633: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-gd8pk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gd8pk/deployments/test-rollover-deployment,UID:c991ef4d-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2360832,Generation:2,CreationTimestamp:2020-07-23 12:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-23 12:05:23 +0000 UTC 2020-07-23 12:05:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-23 12:05:40 +0000 UTC 2020-07-23 12:05:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 23 12:05:41.635: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-gd8pk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gd8pk/replicasets/test-rollover-deployment-5b8479fdb6,UID:cad07977-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2360823,Generation:2,CreationTimestamp:2020-07-23 12:05:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c991ef4d-ccdc-11ea-b2c9-0242ac120008 0xc002414277 0xc002414278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 23 12:05:41.636: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 23 12:05:41.636: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-gd8pk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gd8pk/replicasets/test-rollover-controller,UID:c53f5062-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2360831,Generation:2,CreationTimestamp:2020-07-23 12:05:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c991ef4d-ccdc-11ea-b2c9-0242ac120008 0xc0024140e7 0xc0024140e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 23 12:05:41.636: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-gd8pk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gd8pk/replicasets/test-rollover-deployment-58494b7559,UID:c99af29e-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2360788,Generation:2,CreationTimestamp:2020-07-23 12:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c991ef4d-ccdc-11ea-b2c9-0242ac120008 0xc0024141a7 0xc0024141a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 23 12:05:41.639: INFO: Pod "test-rollover-deployment-5b8479fdb6-8cg4r" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-8cg4r,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-gd8pk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gd8pk/pods/test-rollover-deployment-5b8479fdb6-8cg4r,UID:cadc5e43-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2360801,Generation:0,CreationTimestamp:2020-07-23 12:05:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 cad07977-ccdc-11ea-b2c9-0242ac120008 0xc00222d487 0xc00222d488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-j8zx7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j8zx7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-j8zx7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00222d500} {node.kubernetes.io/unreachable Exists NoExecute 0xc00222d520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:05:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:05:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:05:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:05:25 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.253,StartTime:2020-07-23 12:05:25 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-23 12:05:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://43b8345388d7d123831986f82249289f83ca8790a6d59a53a74ee6ee84e60f39}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:05:41.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-gd8pk" for this suite. Jul 23 12:05:49.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:05:49.669: INFO: namespace: e2e-tests-deployment-gd8pk, resource: bindings, ignored listing per whitelist Jul 23 12:05:49.731: INFO: namespace e2e-tests-deployment-gd8pk deletion completed in 8.089593389s • [SLOW TEST:33.955 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:05:49.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-c9fs STEP: Creating a pod to test atomic-volume-subpath Jul 23 12:05:49.863: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-c9fs" in namespace "e2e-tests-subpath-4scts" to be "success or failure" Jul 23 12:05:49.920: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Pending", Reason="", readiness=false. Elapsed: 56.604943ms Jul 23 12:05:51.923: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059931445s Jul 23 12:05:53.927: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063846343s Jul 23 12:05:55.931: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068101931s Jul 23 12:05:57.935: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Running", Reason="", readiness=false. Elapsed: 8.071850974s Jul 23 12:05:59.939: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Running", Reason="", readiness=false. Elapsed: 10.075448639s Jul 23 12:06:01.943: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Running", Reason="", readiness=false. Elapsed: 12.079806892s Jul 23 12:06:03.948: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Running", Reason="", readiness=false. Elapsed: 14.084369295s Jul 23 12:06:05.953: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Running", Reason="", readiness=false. Elapsed: 16.089429135s Jul 23 12:06:07.957: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Running", Reason="", readiness=false. Elapsed: 18.0938681s Jul 23 12:06:09.961: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Running", Reason="", readiness=false. Elapsed: 20.097991644s Jul 23 12:06:11.966: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.102483233s Jul 23 12:06:13.971: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Running", Reason="", readiness=false. Elapsed: 24.107854205s Jul 23 12:06:15.975: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Running", Reason="", readiness=false. Elapsed: 26.112023072s Jul 23 12:06:17.979: INFO: Pod "pod-subpath-test-configmap-c9fs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.116147473s STEP: Saw pod success Jul 23 12:06:17.979: INFO: Pod "pod-subpath-test-configmap-c9fs" satisfied condition "success or failure" Jul 23 12:06:17.982: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-c9fs container test-container-subpath-configmap-c9fs: STEP: delete the pod Jul 23 12:06:18.266: INFO: Waiting for pod pod-subpath-test-configmap-c9fs to disappear Jul 23 12:06:18.311: INFO: Pod pod-subpath-test-configmap-c9fs no longer exists STEP: Deleting pod pod-subpath-test-configmap-c9fs Jul 23 12:06:18.311: INFO: Deleting pod "pod-subpath-test-configmap-c9fs" in namespace "e2e-tests-subpath-4scts" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:06:18.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-4scts" for this suite. 
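The repeated "Waiting up to 5m0s for pod … Elapsed: …" lines above come from a simple poll-until-condition loop in the e2e framework: re-read the pod's phase every couple of seconds until it reaches a terminal phase or the timeout expires. A minimal Python sketch of that pattern (the function name, parameters, and the `get_phase` callable are illustrative stand-ins, not the actual framework code):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or `timeout` expires.

    `get_phase` stands in for an API read of pod.status.phase; `clock` and
    `sleep` are injectable so the loop can be tested without real delays.
    Returns the terminal phase, or raises TimeoutError.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        # Mirrors the log format: Phase="Running", Elapsed: 8.071850974s
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.9f}s')
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod did not reach {want} within {timeout}s")
        sleep(interval)
```

In the subpath test above the pod stays Pending while the image pulls, runs for about 26 seconds while the container re-reads the atomically updated file, and the loop exits as soon as it observes Succeeded.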
Jul 23 12:06:24.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:06:24.552: INFO: namespace: e2e-tests-subpath-4scts, resource: bindings, ignored listing per whitelist Jul 23 12:06:24.578: INFO: namespace e2e-tests-subpath-4scts deletion completed in 6.222386006s • [SLOW TEST:34.847 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:06:24.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 23 12:06:24.746: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 23 12:06:29.750: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 23 12:06:29.750: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 23 12:06:29.766: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-nkfpb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nkfpb/deployments/test-cleanup-deployment,UID:f1118fa0-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2361023,Generation:1,CreationTimestamp:2020-07-23 12:06:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jul 23 12:06:29.772: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
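The cleanup deployment above uses the default percentage-based rolling-update bounds (maxUnavailable 25%, maxSurge 25%), while the earlier rollover deployment dumped absolute values MaxUnavailable:0, MaxSurge:1. Kubernetes resolves percentages to pod counts by rounding maxSurge up and maxUnavailable down (and, if both would be zero, allowing one unavailable pod so the rollout can progress). A small illustrative sketch of that resolution (hypothetical function, not the controller's actual code):

```python
import math

def resolve_rolling_update(replicas: int,
                           max_surge_pct: int = 25,
                           max_unavailable_pct: int = 25):
    """Resolve percentage-based rolling-update bounds to absolute pod counts.

    maxSurge rounds up, maxUnavailable rounds down; if both resolve to zero,
    maxUnavailable is bumped to 1 so the Deployment can still make progress.
    Returns (max_surge, max_unavailable).
    """
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    if surge == 0 and unavailable == 0:
        unavailable = 1
    return surge, unavailable
```

For the single-replica deployments in this log, 25%/25% resolves to maxSurge=1 and maxUnavailable=0, matching the rollover deployment's dumped strategy.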
Jul 23 12:06:29.772: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jul 23 12:06:29.772: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-nkfpb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nkfpb/replicasets/test-cleanup-controller,UID:ee0eba81-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2361024,Generation:1,CreationTimestamp:2020-07-23 12:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment f1118fa0-ccdc-11ea-b2c9-0242ac120008 0xc000cfbbb7 0xc000cfbbb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 23 12:06:29.791: INFO: Pod "test-cleanup-controller-2rrn4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-2rrn4,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-nkfpb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nkfpb/pods/test-cleanup-controller-2rrn4,UID:ee15d8f0-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2361017,Generation:0,CreationTimestamp:2020-07-23 12:06:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller ee0eba81-ccdc-11ea-b2c9-0242ac120008 0xc0026225c7 0xc0026225c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dh5wh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dh5wh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dh5wh true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002622650} {node.kubernetes.io/unreachable Exists NoExecute 0xc002622670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:06:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:06:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:06:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:06:24 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.254,StartTime:2020-07-23 12:06:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-23 12:06:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e0b4ae30320a8abe35b8aff4210bc8b7a1fb0f06815fde4763dcb19f45f3a096}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:06:29.791: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-nkfpb" for this suite. Jul 23 12:06:37.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:06:37.928: INFO: namespace: e2e-tests-deployment-nkfpb, resource: bindings, ignored listing per whitelist Jul 23 12:06:37.976: INFO: namespace e2e-tests-deployment-nkfpb deletion completed in 8.154120589s • [SLOW TEST:13.398 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:06:37.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jul 23 12:06:38.674: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pgs9g,SelfLink:/api/v1/namespaces/e2e-tests-watch-pgs9g/configmaps/e2e-watch-test-label-changed,UID:f642ed69-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2361085,Generation:0,CreationTimestamp:2020-07-23 12:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 23 12:06:38.674: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pgs9g,SelfLink:/api/v1/namespaces/e2e-tests-watch-pgs9g/configmaps/e2e-watch-test-label-changed,UID:f642ed69-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2361086,Generation:0,CreationTimestamp:2020-07-23 12:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jul 23 12:06:38.674: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pgs9g,SelfLink:/api/v1/namespaces/e2e-tests-watch-pgs9g/configmaps/e2e-watch-test-label-changed,UID:f642ed69-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2361087,Generation:0,CreationTimestamp:2020-07-23 12:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jul 23 12:06:48.721: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pgs9g,SelfLink:/api/v1/namespaces/e2e-tests-watch-pgs9g/configmaps/e2e-watch-test-label-changed,UID:f642ed69-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2361108,Generation:0,CreationTimestamp:2020-07-23 12:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 23 12:06:48.721: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pgs9g,SelfLink:/api/v1/namespaces/e2e-tests-watch-pgs9g/configmaps/e2e-watch-test-label-changed,UID:f642ed69-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2361109,Generation:0,CreationTimestamp:2020-07-23 12:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jul 23 12:06:48.721: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pgs9g,SelfLink:/api/v1/namespaces/e2e-tests-watch-pgs9g/configmaps/e2e-watch-test-label-changed,UID:f642ed69-ccdc-11ea-b2c9-0242ac120008,ResourceVersion:2361110,Generation:0,CreationTimestamp:2020-07-23 12:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:06:48.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-pgs9g" for this suite. Jul 23 12:06:54.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:06:54.752: INFO: namespace: e2e-tests-watch-pgs9g, resource: bindings, ignored listing per whitelist Jul 23 12:06:54.809: INFO: namespace e2e-tests-watch-pgs9g deletion completed in 6.08304929s • [SLOW TEST:16.832 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:06:54.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:07:01.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-g4prh" for this suite. Jul 23 12:07:07.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:07:07.853: INFO: namespace: e2e-tests-emptydir-wrapper-g4prh, resource: bindings, ignored listing per whitelist Jul 23 12:07:07.855: INFO: namespace e2e-tests-emptydir-wrapper-g4prh deletion completed in 6.134372225s • [SLOW TEST:13.046 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client Jul 23 12:07:07.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 23 12:07:16.500: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 23 12:07:16.664: INFO: Pod pod-with-poststart-http-hook still exists Jul 23 12:07:18.664: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 23 12:07:18.700: INFO: Pod pod-with-poststart-http-hook still exists Jul 23 12:07:20.664: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 23 12:07:20.669: INFO: Pod pod-with-poststart-http-hook still exists Jul 23 12:07:22.665: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 23 12:07:22.669: INFO: Pod pod-with-poststart-http-hook still exists Jul 23 12:07:24.664: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 23 12:07:24.668: INFO: Pod pod-with-poststart-http-hook still exists Jul 23 12:07:26.664: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 23 12:07:26.669: INFO: Pod pod-with-poststart-http-hook still exists Jul 23 12:07:28.665: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 23 12:07:28.669: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:07:28.669: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lbwz9" for this suite. Jul 23 12:08:02.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:08:02.739: INFO: namespace: e2e-tests-container-lifecycle-hook-lbwz9, resource: bindings, ignored listing per whitelist Jul 23 12:08:02.782: INFO: namespace e2e-tests-container-lifecycle-hook-lbwz9 deletion completed in 34.108002614s • [SLOW TEST:54.927 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:08:02.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jul 23 12:08:02.964: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 23 12:08:03.077: INFO: Waiting for terminating namespaces to be deleted... 
Jul 23 12:08:03.080: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jul 23 12:08:03.085: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Jul 23 12:08:03.085: INFO: Container kube-proxy ready: true, restart count 0 Jul 23 12:08:03.085: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Jul 23 12:08:03.085: INFO: Container kindnet-cni ready: true, restart count 0 Jul 23 12:08:03.085: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jul 23 12:08:03.091: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Jul 23 12:08:03.091: INFO: Container kube-proxy ready: true, restart count 0 Jul 23 12:08:03.091: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Jul 23 12:08:03.091: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1624605448743c61], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:08:04.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-nwvwj" for this suite. 
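The FailedScheduling event above comes from a pod whose nodeSelector matches no label on any node. A minimal sketch of such a pod (the label key/value and image are illustrative placeholders, not the exact manifest the e2e framework generates):

```yaml
# Hypothetical pod spec: no node carries this label, so the scheduler
# reports "0/3 nodes are available: 3 node(s) didn't match node selector."
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty        # placeholder; matches no node in the cluster
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```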
Jul 23 12:08:10.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:08:10.195: INFO: namespace: e2e-tests-sched-pred-nwvwj, resource: bindings, ignored listing per whitelist Jul 23 12:08:10.203: INFO: namespace e2e-tests-sched-pred-nwvwj deletion completed in 6.088704335s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.421 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:08:10.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 23 12:08:10.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d02d00c-ccdd-11ea-92a5-0242ac11000b" in namespace 
"e2e-tests-projected-88zvx" to be "success or failure" Jul 23 12:08:10.347: INFO: Pod "downwardapi-volume-2d02d00c-ccdd-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.429966ms Jul 23 12:08:12.407: INFO: Pod "downwardapi-volume-2d02d00c-ccdd-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079366289s Jul 23 12:08:14.425: INFO: Pod "downwardapi-volume-2d02d00c-ccdd-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097258228s STEP: Saw pod success Jul 23 12:08:14.425: INFO: Pod "downwardapi-volume-2d02d00c-ccdd-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 12:08:14.428: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2d02d00c-ccdd-11ea-92a5-0242ac11000b container client-container: STEP: delete the pod Jul 23 12:08:14.666: INFO: Waiting for pod downwardapi-volume-2d02d00c-ccdd-11ea-92a5-0242ac11000b to disappear Jul 23 12:08:14.785: INFO: Pod downwardapi-volume-2d02d00c-ccdd-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:08:14.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-88zvx" for this suite. 
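The "podname only" test exercises a projected downwardAPI volume. A sketch of what such a pod looks like, assuming an illustrative busybox image and mount path (not the framework's exact manifest):

```yaml
# Sketch: projected downwardAPI volume exposing only the pod's name
# as a file the container can read.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-podname
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```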
Jul 23 12:08:20.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:08:20.868: INFO: namespace: e2e-tests-projected-88zvx, resource: bindings, ignored listing per whitelist Jul 23 12:08:20.873: INFO: namespace e2e-tests-projected-88zvx deletion completed in 6.083324569s • [SLOW TEST:10.670 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:08:20.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jul 23 12:08:21.024: INFO: Pod name pod-release: Found 0 pods out of 1 Jul 23 12:08:26.029: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:08:27.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-replication-controller-cs57f" for this suite. Jul 23 12:08:33.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:08:33.149: INFO: namespace: e2e-tests-replication-controller-cs57f, resource: bindings, ignored listing per whitelist Jul 23 12:08:33.436: INFO: namespace e2e-tests-replication-controller-cs57f deletion completed in 6.376835276s • [SLOW TEST:12.563 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:08:33.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 23 12:08:33.670: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3aebe578-ccdd-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-8jb9b" to be "success or failure" Jul 23 12:08:33.995: INFO: Pod "downwardapi-volume-3aebe578-ccdd-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", 
readiness=false. Elapsed: 324.924856ms Jul 23 12:08:35.999: INFO: Pod "downwardapi-volume-3aebe578-ccdd-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328974075s Jul 23 12:08:38.004: INFO: Pod "downwardapi-volume-3aebe578-ccdd-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.333329723s STEP: Saw pod success Jul 23 12:08:38.004: INFO: Pod "downwardapi-volume-3aebe578-ccdd-11ea-92a5-0242ac11000b" satisfied condition "success or failure" Jul 23 12:08:38.007: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3aebe578-ccdd-11ea-92a5-0242ac11000b container client-container: STEP: delete the pod Jul 23 12:08:38.092: INFO: Waiting for pod downwardapi-volume-3aebe578-ccdd-11ea-92a5-0242ac11000b to disappear Jul 23 12:08:38.105: INFO: Pod downwardapi-volume-3aebe578-ccdd-11ea-92a5-0242ac11000b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:08:38.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8jb9b" for this suite. 
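The "set mode on item file" test sets an explicit file mode on a downwardAPI volume item. A sketch under the same assumptions (illustrative image and paths):

```yaml
# Sketch: a downwardAPI volume item with an explicit mode (0400),
# so the projected file is created read-only for its owner.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400
        fieldRef:
          fieldPath: metadata.name
```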
Jul 23 12:08:44.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:08:44.181: INFO: namespace: e2e-tests-downward-api-8jb9b, resource: bindings, ignored listing per whitelist Jul 23 12:08:44.215: INFO: namespace e2e-tests-downward-api-8jb9b deletion completed in 6.106988616s • [SLOW TEST:10.778 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:08:44.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 23 12:08:50.838: INFO: Successfully updated pod "pod-update-4142c9ed-ccdd-11ea-92a5-0242ac11000b" STEP: verifying the updated pod is in kubernetes Jul 23 12:08:50.857: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:08:50.857: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-pphsn" for this suite. Jul 23 12:09:12.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:09:12.882: INFO: namespace: e2e-tests-pods-pphsn, resource: bindings, ignored listing per whitelist Jul 23 12:09:12.942: INFO: namespace e2e-tests-pods-pphsn deletion completed in 22.08092687s • [SLOW TEST:28.727 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:09:12.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:09:17.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-wc2n7" for this suite. 
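The Kubelet test above verifies that a busybox command's stdout reaches the container log. A minimal sketch of such a pod (name and message are illustrative); its output would be retrievable with `kubectl logs`:

```yaml
# Sketch: whatever the command writes to stdout ends up in the
# container log, readable via `kubectl logs busybox-logs-test`.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-test
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello from busybox'"]
```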
Jul 23 12:10:11.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:10:11.133: INFO: namespace: e2e-tests-kubelet-test-wc2n7, resource: bindings, ignored listing per whitelist Jul 23 12:10:11.193: INFO: namespace e2e-tests-kubelet-test-wc2n7 deletion completed in 54.104169082s • [SLOW TEST:58.251 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:10:11.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-gc2hf STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 23 12:10:11.318: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 23 12:10:35.801: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.87:8080/dial?request=hostName&protocol=udp&host=10.244.1.5&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-gc2hf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 23 12:10:35.801: INFO: >>> kubeConfig: /root/.kube/config I0723 12:10:35.837230 6 log.go:172] (0xc00186a4d0) (0xc00039e780) Create stream I0723 12:10:35.837273 6 log.go:172] (0xc00186a4d0) (0xc00039e780) Stream added, broadcasting: 1 I0723 12:10:35.840860 6 log.go:172] (0xc00186a4d0) Reply frame received for 1 I0723 12:10:35.840983 6 log.go:172] (0xc00186a4d0) (0xc001fbc000) Create stream I0723 12:10:35.840998 6 log.go:172] (0xc00186a4d0) (0xc001fbc000) Stream added, broadcasting: 3 I0723 12:10:35.842176 6 log.go:172] (0xc00186a4d0) Reply frame received for 3 I0723 12:10:35.842225 6 log.go:172] (0xc00186a4d0) (0xc001fbc0a0) Create stream I0723 12:10:35.842256 6 log.go:172] (0xc00186a4d0) (0xc001fbc0a0) Stream added, broadcasting: 5 I0723 12:10:35.843364 6 log.go:172] (0xc00186a4d0) Reply frame received for 5 I0723 12:10:35.907848 6 log.go:172] (0xc00186a4d0) Data frame received for 3 I0723 12:10:35.907880 6 log.go:172] (0xc001fbc000) (3) Data frame handling I0723 12:10:35.907901 6 log.go:172] (0xc001fbc000) (3) Data frame sent I0723 12:10:35.908523 6 log.go:172] (0xc00186a4d0) Data frame received for 3 I0723 12:10:35.908556 6 log.go:172] (0xc001fbc000) (3) Data frame handling I0723 12:10:35.908715 6 log.go:172] (0xc00186a4d0) Data frame received for 5 I0723 12:10:35.908830 6 log.go:172] (0xc001fbc0a0) (5) Data frame handling I0723 12:10:35.910847 6 log.go:172] (0xc00186a4d0) Data frame received for 1 I0723 12:10:35.910867 6 log.go:172] (0xc00039e780) (1) Data frame handling I0723 12:10:35.910879 6 log.go:172] (0xc00039e780) (1) Data frame sent I0723 12:10:35.910900 6 log.go:172] (0xc00186a4d0) (0xc00039e780) Stream removed, broadcasting: 1 I0723 12:10:35.910939 6 log.go:172] (0xc00186a4d0) Go away 
received I0723 12:10:35.911010 6 log.go:172] (0xc00186a4d0) (0xc00039e780) Stream removed, broadcasting: 1 I0723 12:10:35.911058 6 log.go:172] (0xc00186a4d0) (0xc001fbc000) Stream removed, broadcasting: 3 I0723 12:10:35.911078 6 log.go:172] (0xc00186a4d0) (0xc001fbc0a0) Stream removed, broadcasting: 5 Jul 23 12:10:35.911: INFO: Waiting for endpoints: map[] Jul 23 12:10:35.914: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.87:8080/dial?request=hostName&protocol=udp&host=10.244.2.86&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-gc2hf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 23 12:10:35.914: INFO: >>> kubeConfig: /root/.kube/config I0723 12:10:35.947927 6 log.go:172] (0xc00102e2c0) (0xc001191b80) Create stream I0723 12:10:35.947955 6 log.go:172] (0xc00102e2c0) (0xc001191b80) Stream added, broadcasting: 1 I0723 12:10:35.950211 6 log.go:172] (0xc00102e2c0) Reply frame received for 1 I0723 12:10:35.950250 6 log.go:172] (0xc00102e2c0) (0xc001fbc1e0) Create stream I0723 12:10:35.950264 6 log.go:172] (0xc00102e2c0) (0xc001fbc1e0) Stream added, broadcasting: 3 I0723 12:10:35.951030 6 log.go:172] (0xc00102e2c0) Reply frame received for 3 I0723 12:10:35.951056 6 log.go:172] (0xc00102e2c0) (0xc001fbc3c0) Create stream I0723 12:10:35.951066 6 log.go:172] (0xc00102e2c0) (0xc001fbc3c0) Stream added, broadcasting: 5 I0723 12:10:35.951986 6 log.go:172] (0xc00102e2c0) Reply frame received for 5 I0723 12:10:36.011313 6 log.go:172] (0xc00102e2c0) Data frame received for 3 I0723 12:10:36.011337 6 log.go:172] (0xc001fbc1e0) (3) Data frame handling I0723 12:10:36.011351 6 log.go:172] (0xc001fbc1e0) (3) Data frame sent I0723 12:10:36.012221 6 log.go:172] (0xc00102e2c0) Data frame received for 5 I0723 12:10:36.012247 6 log.go:172] (0xc001fbc3c0) (5) Data frame handling I0723 12:10:36.012420 6 log.go:172] (0xc00102e2c0) Data frame received for 3 I0723 
12:10:36.012434 6 log.go:172] (0xc001fbc1e0) (3) Data frame handling I0723 12:10:36.014478 6 log.go:172] (0xc00102e2c0) Data frame received for 1 I0723 12:10:36.014534 6 log.go:172] (0xc001191b80) (1) Data frame handling I0723 12:10:36.014575 6 log.go:172] (0xc001191b80) (1) Data frame sent I0723 12:10:36.014601 6 log.go:172] (0xc00102e2c0) (0xc001191b80) Stream removed, broadcasting: 1 I0723 12:10:36.014639 6 log.go:172] (0xc00102e2c0) Go away received I0723 12:10:36.014774 6 log.go:172] (0xc00102e2c0) (0xc001191b80) Stream removed, broadcasting: 1 I0723 12:10:36.014807 6 log.go:172] (0xc00102e2c0) (0xc001fbc1e0) Stream removed, broadcasting: 3 I0723 12:10:36.014826 6 log.go:172] (0xc00102e2c0) (0xc001fbc3c0) Stream removed, broadcasting: 5 Jul 23 12:10:36.014: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:10:36.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-gc2hf" for this suite. 
Jul 23 12:11:00.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 23 12:11:00.046: INFO: namespace: e2e-tests-pod-network-test-gc2hf, resource: bindings, ignored listing per whitelist Jul 23 12:11:00.108: INFO: namespace e2e-tests-pod-network-test-gc2hf deletion completed in 24.088829387s • [SLOW TEST:48.915 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 23 12:11:00.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jul 23 12:11:00.175: INFO: namespace e2e-tests-kubectl-brfrx Jul 23 12:11:00.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-brfrx' Jul 23 12:11:03.018: INFO: stderr: "" Jul 23 12:11:03.018: INFO: stdout: 
"replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jul 23 12:11:04.023: INFO: Selector matched 1 pods for map[app:redis] Jul 23 12:11:04.023: INFO: Found 0 / 1 Jul 23 12:11:05.051: INFO: Selector matched 1 pods for map[app:redis] Jul 23 12:11:05.051: INFO: Found 0 / 1 Jul 23 12:11:06.023: INFO: Selector matched 1 pods for map[app:redis] Jul 23 12:11:06.023: INFO: Found 0 / 1 Jul 23 12:11:07.023: INFO: Selector matched 1 pods for map[app:redis] Jul 23 12:11:07.023: INFO: Found 1 / 1 Jul 23 12:11:07.023: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 23 12:11:07.027: INFO: Selector matched 1 pods for map[app:redis] Jul 23 12:11:07.027: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 23 12:11:07.027: INFO: wait on redis-master startup in e2e-tests-kubectl-brfrx Jul 23 12:11:07.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9n8n5 redis-master --namespace=e2e-tests-kubectl-brfrx' Jul 23 12:11:07.142: INFO: stderr: "" Jul 23 12:11:07.143: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Jul 12:11:05.947 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jul 12:11:05.947 # Server started, Redis version 3.2.12\n1:M 23 Jul 12:11:05.947 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Jul 12:11:05.947 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jul 23 12:11:07.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-brfrx' Jul 23 12:11:07.329: INFO: stderr: "" Jul 23 12:11:07.329: INFO: stdout: "service/rm2 exposed\n" Jul 23 12:11:07.337: INFO: Service rm2 in namespace e2e-tests-kubectl-brfrx found. STEP: exposing service Jul 23 12:11:09.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-brfrx' Jul 23 12:11:09.492: INFO: stderr: "" Jul 23 12:11:09.492: INFO: stdout: "service/rm3 exposed\n" Jul 23 12:11:09.509: INFO: Service rm3 in namespace e2e-tests-kubectl-brfrx found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 23 12:11:11.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-brfrx" for this suite. 
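The first `kubectl expose` invocation above is roughly equivalent to creating a Service by hand. A sketch of the rm2 Service, assuming the RC's pod template carries the `app: redis` label as the selector output in this run suggests:

```yaml
# Approximate Service generated by:
#   kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis          # assumed: copied from the RC's pod template labels
  ports:
  - port: 1234          # service port, from --port=1234
    targetPort: 6379    # container port, from --target-port=6379
```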
Jul 23 12:11:33.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:11:33.591: INFO: namespace: e2e-tests-kubectl-brfrx, resource: bindings, ignored listing per whitelist
Jul 23 12:11:33.626: INFO: namespace e2e-tests-kubectl-brfrx deletion completed in 22.105763371s

• [SLOW TEST:33.518 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:11:33.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 12:11:33.769: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
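For reference, the `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` step exercised above is roughly equivalent to applying a Service manifest like the following. This is a minimal sketch, assuming the RC's pods carry the `app: redis` label that the selector-match lines in the log report:

```yaml
# Sketch of the Service generated by `kubectl expose rc redis-master
# --name=rm2 --port=1234 --target-port=6379`; the selector is an
# assumption based on the app=redis label seen in the log.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: e2e-tests-kubectl-brfrx
spec:
  selector:
    app: redis
  ports:
  - port: 1234        # port the Service listens on
    targetPort: 6379  # port on the Redis pod
```

The second step (`expose service rm2 --name=rm3`) does the same thing again, copying the selector from `rm2` rather than from an RC.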
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 23 12:11:40.120: INFO: Waiting up to 5m0s for pod "pod-aa0e6a7f-ccdd-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-9k872" to be "success or failure"
Jul 23 12:11:40.123: INFO: Pod "pod-aa0e6a7f-ccdd-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.228379ms
Jul 23 12:11:42.127: INFO: Pod "pod-aa0e6a7f-ccdd-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006732583s
Jul 23 12:11:44.131: INFO: Pod "pod-aa0e6a7f-ccdd-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010817128s
STEP: Saw pod success
Jul 23 12:11:44.131: INFO: Pod "pod-aa0e6a7f-ccdd-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:11:44.134: INFO: Trying to get logs from node hunter-worker2 pod pod-aa0e6a7f-ccdd-11ea-92a5-0242ac11000b container test-container: 
STEP: delete the pod
Jul 23 12:11:44.155: INFO: Waiting for pod pod-aa0e6a7f-ccdd-11ea-92a5-0242ac11000b to disappear
Jul 23 12:11:44.159: INFO: Pod pod-aa0e6a7f-ccdd-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:11:44.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9k872" for this suite.
Jul 23 12:11:50.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:11:50.206: INFO: namespace: e2e-tests-emptydir-9k872, resource: bindings, ignored listing per whitelist
Jul 23 12:11:50.248: INFO: namespace e2e-tests-emptydir-9k872 deletion completed in 6.085147728s

• [SLOW TEST:10.281 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
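The EmptyDir test above mounts a tmpfs-backed `emptyDir` (`medium: Memory`) and verifies a file created with mode 0666 as a non-root user. In outline, the pod looks like the following. This is a hedged sketch, not the exact test fixture; the pod name, image, and mount path are assumptions based on the conventional e2e mounttest setup:

```yaml
# Sketch of the (non-root,0666,tmpfs) EmptyDir test pod; names, image,
# and paths are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  securityContext:
    runAsUser: 1001              # non-root, per the test title
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed image
    args: ["--fs_type=/test-volume",
           "--new_file_0666=/test-volume/test-file",
           "--file_perm=/test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
  restartPolicy: Never
```

The container writes the file, prints its permissions, and exits; the test then asserts the "success or failure" condition seen in the log.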
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:11:50.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 12:11:50.350: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0243a1e-ccdd-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-29bxg" to be "success or failure"
Jul 23 12:11:50.391: INFO: Pod "downwardapi-volume-b0243a1e-ccdd-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.27276ms
Jul 23 12:11:52.394: INFO: Pod "downwardapi-volume-b0243a1e-ccdd-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043574655s
Jul 23 12:11:54.398: INFO: Pod "downwardapi-volume-b0243a1e-ccdd-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.047278328s
Jul 23 12:11:56.402: INFO: Pod "downwardapi-volume-b0243a1e-ccdd-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051248875s
STEP: Saw pod success
Jul 23 12:11:56.402: INFO: Pod "downwardapi-volume-b0243a1e-ccdd-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:11:56.404: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b0243a1e-ccdd-11ea-92a5-0242ac11000b container client-container: 
STEP: delete the pod
Jul 23 12:11:56.429: INFO: Waiting for pod downwardapi-volume-b0243a1e-ccdd-11ea-92a5-0242ac11000b to disappear
Jul 23 12:11:56.433: INFO: Pod downwardapi-volume-b0243a1e-ccdd-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:11:56.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-29bxg" for this suite.
Jul 23 12:12:02.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:12:02.499: INFO: namespace: e2e-tests-projected-29bxg, resource: bindings, ignored listing per whitelist
Jul 23 12:12:02.537: INFO: namespace e2e-tests-projected-29bxg deletion completed in 6.100644605s

• [SLOW TEST:12.289 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
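The projected downward API test above checks that `defaultMode` is applied to files in a projected volume. A minimal sketch of such a pod, assuming the conventional e2e layout (the mode value, image, and paths are assumptions, not taken from the log):

```yaml
# Sketch of a projected downwardAPI volume with defaultMode; values are
# illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed image
    args: ["--file_mode=/etc/podinfo/podname"]   # prints the file's mode
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # mode applied to all files in the volume
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
```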
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:12:02.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-xjr4v
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jul 23 12:12:02.679: INFO: Found 0 stateful pods, waiting for 3
Jul 23 12:12:12.730: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 12:12:12.730: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 12:12:12.730: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 23 12:12:22.684: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 12:12:22.684: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 12:12:22.684: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 12:12:22.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjr4v ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 23 12:12:22.970: INFO: stderr: "I0723 12:12:22.833204    1230 log.go:172] (0xc000144840) (0xc000782640) Create stream\nI0723 12:12:22.833268    1230 log.go:172] (0xc000144840) (0xc000782640) Stream added, broadcasting: 1\nI0723 12:12:22.836048    1230 log.go:172] (0xc000144840) Reply frame received for 1\nI0723 12:12:22.836118    1230 log.go:172] (0xc000144840) (0xc000670f00) Create stream\nI0723 12:12:22.836148    1230 log.go:172] (0xc000144840) (0xc000670f00) Stream added, broadcasting: 3\nI0723 12:12:22.837415    1230 log.go:172] (0xc000144840) Reply frame received for 3\nI0723 12:12:22.837447    1230 log.go:172] (0xc000144840) (0xc0007826e0) Create stream\nI0723 12:12:22.837458    1230 log.go:172] (0xc000144840) (0xc0007826e0) Stream added, broadcasting: 5\nI0723 12:12:22.838516    1230 log.go:172] (0xc000144840) Reply frame received for 5\nI0723 12:12:22.962791    1230 log.go:172] (0xc000144840) Data frame received for 5\nI0723 12:12:22.962857    1230 log.go:172] (0xc0007826e0) (5) Data frame handling\nI0723 12:12:22.962897    1230 log.go:172] (0xc000144840) Data frame received for 3\nI0723 12:12:22.962921    1230 log.go:172] (0xc000670f00) (3) Data frame handling\nI0723 12:12:22.962941    1230 log.go:172] (0xc000670f00) (3) Data frame sent\nI0723 12:12:22.962951    1230 log.go:172] (0xc000144840) Data frame received for 3\nI0723 12:12:22.962969    1230 log.go:172] (0xc000670f00) (3) Data frame handling\nI0723 12:12:22.964562    1230 log.go:172] (0xc000144840) Data frame received for 1\nI0723 12:12:22.964584    1230 log.go:172] (0xc000782640) (1) Data frame handling\nI0723 12:12:22.964592    1230 log.go:172] (0xc000782640) (1) Data frame sent\nI0723 12:12:22.964604    1230 log.go:172] (0xc000144840) (0xc000782640) Stream removed, broadcasting: 1\nI0723 12:12:22.964621    1230 log.go:172] (0xc000144840) Go away received\nI0723 12:12:22.964933    1230 log.go:172] (0xc000144840) (0xc000782640) Stream removed, broadcasting: 1\nI0723 12:12:22.964970    1230 log.go:172] (0xc000144840) (0xc000670f00) Stream removed, broadcasting: 3\nI0723 12:12:22.964983    1230 log.go:172] (0xc000144840) (0xc0007826e0) Stream removed, broadcasting: 5\n"
Jul 23 12:12:22.970: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 23 12:12:22.970: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul 23 12:12:33.034: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul 23 12:12:43.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjr4v ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:12:43.367: INFO: stderr: "I0723 12:12:43.264206    1252 log.go:172] (0xc000138840) (0xc000756640) Create stream\nI0723 12:12:43.264261    1252 log.go:172] (0xc000138840) (0xc000756640) Stream added, broadcasting: 1\nI0723 12:12:43.267049    1252 log.go:172] (0xc000138840) Reply frame received for 1\nI0723 12:12:43.267100    1252 log.go:172] (0xc000138840) (0xc0001f9040) Create stream\nI0723 12:12:43.267115    1252 log.go:172] (0xc000138840) (0xc0001f9040) Stream added, broadcasting: 3\nI0723 12:12:43.268296    1252 log.go:172] (0xc000138840) Reply frame received for 3\nI0723 12:12:43.268369    1252 log.go:172] (0xc000138840) (0xc0001f6000) Create stream\nI0723 12:12:43.268399    1252 log.go:172] (0xc000138840) (0xc0001f6000) Stream added, broadcasting: 5\nI0723 12:12:43.269728    1252 log.go:172] (0xc000138840) Reply frame received for 5\nI0723 12:12:43.360005    1252 log.go:172] (0xc000138840) Data frame received for 5\nI0723 12:12:43.360027    1252 log.go:172] (0xc0001f6000) (5) Data frame handling\nI0723 12:12:43.360072    1252 log.go:172] (0xc000138840) Data frame received for 3\nI0723 12:12:43.360133    1252 log.go:172] (0xc0001f9040) (3) Data frame handling\nI0723 12:12:43.360163    1252 log.go:172] (0xc0001f9040) (3) Data frame sent\nI0723 12:12:43.360185    1252 log.go:172] (0xc000138840) Data frame received for 3\nI0723 12:12:43.360213    1252 log.go:172] (0xc0001f9040) (3) Data frame handling\nI0723 12:12:43.362410    1252 log.go:172] (0xc000138840) Data frame received for 1\nI0723 12:12:43.362424    1252 log.go:172] (0xc000756640) (1) Data frame handling\nI0723 12:12:43.362432    1252 log.go:172] (0xc000756640) (1) Data frame sent\nI0723 12:12:43.362444    1252 log.go:172] (0xc000138840) (0xc000756640) Stream removed, broadcasting: 1\nI0723 12:12:43.362457    1252 log.go:172] (0xc000138840) Go away received\nI0723 12:12:43.362725    1252 log.go:172] (0xc000138840) (0xc000756640) Stream removed, broadcasting: 1\nI0723 12:12:43.362741    1252 log.go:172] (0xc000138840) (0xc0001f9040) Stream removed, broadcasting: 3\nI0723 12:12:43.362748    1252 log.go:172] (0xc000138840) (0xc0001f6000) Stream removed, broadcasting: 5\n"
Jul 23 12:12:43.368: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 23 12:12:43.368: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 23 12:12:53.387: INFO: Waiting for StatefulSet e2e-tests-statefulset-xjr4v/ss2 to complete update
Jul 23 12:12:53.387: INFO: Waiting for Pod e2e-tests-statefulset-xjr4v/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 23 12:12:53.387: INFO: Waiting for Pod e2e-tests-statefulset-xjr4v/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 23 12:12:53.387: INFO: Waiting for Pod e2e-tests-statefulset-xjr4v/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 23 12:13:03.393: INFO: Waiting for StatefulSet e2e-tests-statefulset-xjr4v/ss2 to complete update
Jul 23 12:13:03.394: INFO: Waiting for Pod e2e-tests-statefulset-xjr4v/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 23 12:13:03.394: INFO: Waiting for Pod e2e-tests-statefulset-xjr4v/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 23 12:13:13.394: INFO: Waiting for StatefulSet e2e-tests-statefulset-xjr4v/ss2 to complete update
Jul 23 12:13:13.394: INFO: Waiting for Pod e2e-tests-statefulset-xjr4v/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Jul 23 12:13:23.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjr4v ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 23 12:13:23.651: INFO: stderr: "I0723 12:13:23.540551    1275 log.go:172] (0xc00071c370) (0xc000609360) Create stream\nI0723 12:13:23.540874    1275 log.go:172] (0xc00071c370) (0xc000609360) Stream added, broadcasting: 1\nI0723 12:13:23.542734    1275 log.go:172] (0xc00071c370) Reply frame received for 1\nI0723 12:13:23.542803    1275 log.go:172] (0xc00071c370) (0xc00064e000) Create stream\nI0723 12:13:23.542830    1275 log.go:172] (0xc00071c370) (0xc00064e000) Stream added, broadcasting: 3\nI0723 12:13:23.543526    1275 log.go:172] (0xc00071c370) Reply frame received for 3\nI0723 12:13:23.543567    1275 log.go:172] (0xc00071c370) (0xc00069a000) Create stream\nI0723 12:13:23.543582    1275 log.go:172] (0xc00071c370) (0xc00069a000) Stream added, broadcasting: 5\nI0723 12:13:23.544282    1275 log.go:172] (0xc00071c370) Reply frame received for 5\nI0723 12:13:23.646107    1275 log.go:172] (0xc00071c370) Data frame received for 5\nI0723 12:13:23.646172    1275 log.go:172] (0xc00069a000) (5) Data frame handling\nI0723 12:13:23.646215    1275 log.go:172] (0xc00071c370) Data frame received for 3\nI0723 12:13:23.646253    1275 log.go:172] (0xc00064e000) (3) Data frame handling\nI0723 12:13:23.646282    1275 log.go:172] (0xc00064e000) (3) Data frame sent\nI0723 12:13:23.646315    1275 log.go:172] (0xc00071c370) Data frame received for 3\nI0723 12:13:23.646338    1275 log.go:172] (0xc00064e000) (3) Data frame handling\nI0723 12:13:23.647779    1275 log.go:172] (0xc00071c370) Data frame received for 1\nI0723 12:13:23.647799    1275 log.go:172] (0xc000609360) (1) Data frame handling\nI0723 12:13:23.647812    1275 log.go:172] (0xc000609360) (1) Data frame sent\nI0723 12:13:23.647835    1275 log.go:172] (0xc00071c370) (0xc000609360) Stream removed, broadcasting: 1\nI0723 12:13:23.647878    1275 log.go:172] (0xc00071c370) Go away received\nI0723 12:13:23.647991    1275 log.go:172] (0xc00071c370) (0xc000609360) Stream removed, broadcasting: 1\nI0723 12:13:23.648014    1275 log.go:172] (0xc00071c370) (0xc00064e000) Stream removed, broadcasting: 3\nI0723 12:13:23.648033    1275 log.go:172] (0xc00071c370) (0xc00069a000) Stream removed, broadcasting: 5\n"
Jul 23 12:13:23.652: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 23 12:13:23.652: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 23 12:13:33.684: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul 23 12:13:43.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xjr4v ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:13:43.923: INFO: stderr: "I0723 12:13:43.846004    1297 log.go:172] (0xc00084e2c0) (0xc00073a640) Create stream\nI0723 12:13:43.846084    1297 log.go:172] (0xc00084e2c0) (0xc00073a640) Stream added, broadcasting: 1\nI0723 12:13:43.848828    1297 log.go:172] (0xc00084e2c0) Reply frame received for 1\nI0723 12:13:43.848888    1297 log.go:172] (0xc00084e2c0) (0xc0005f0be0) Create stream\nI0723 12:13:43.848917    1297 log.go:172] (0xc00084e2c0) (0xc0005f0be0) Stream added, broadcasting: 3\nI0723 12:13:43.849920    1297 log.go:172] (0xc00084e2c0) Reply frame received for 3\nI0723 12:13:43.849967    1297 log.go:172] (0xc00084e2c0) (0xc0006b8000) Create stream\nI0723 12:13:43.849979    1297 log.go:172] (0xc00084e2c0) (0xc0006b8000) Stream added, broadcasting: 5\nI0723 12:13:43.851934    1297 log.go:172] (0xc00084e2c0) Reply frame received for 5\nI0723 12:13:43.916974    1297 log.go:172] (0xc00084e2c0) Data frame received for 5\nI0723 12:13:43.917031    1297 log.go:172] (0xc0006b8000) (5) Data frame handling\nI0723 12:13:43.917060    1297 log.go:172] (0xc00084e2c0) Data frame received for 3\nI0723 12:13:43.917071    1297 log.go:172] (0xc0005f0be0) (3) Data frame handling\nI0723 12:13:43.917084    1297 log.go:172] (0xc0005f0be0) (3) Data frame sent\nI0723 12:13:43.917095    1297 log.go:172] (0xc00084e2c0) Data frame received for 3\nI0723 12:13:43.917105    1297 log.go:172] (0xc0005f0be0) (3) Data frame handling\nI0723 12:13:43.918797    1297 log.go:172] (0xc00084e2c0) Data frame received for 1\nI0723 12:13:43.918840    1297 log.go:172] (0xc00073a640) (1) Data frame handling\nI0723 12:13:43.918852    1297 log.go:172] (0xc00073a640) (1) Data frame sent\nI0723 12:13:43.918865    1297 log.go:172] (0xc00084e2c0) (0xc00073a640) Stream removed, broadcasting: 1\nI0723 12:13:43.918889    1297 log.go:172] (0xc00084e2c0) Go away received\nI0723 12:13:43.919188    1297 log.go:172] (0xc00084e2c0) (0xc00073a640) Stream removed, broadcasting: 1\nI0723 12:13:43.919219    1297 log.go:172] (0xc00084e2c0) (0xc0005f0be0) Stream removed, broadcasting: 3\nI0723 12:13:43.919237    1297 log.go:172] (0xc00084e2c0) (0xc0006b8000) Stream removed, broadcasting: 5\n"
Jul 23 12:13:43.924: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 23 12:13:43.924: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 23 12:14:03.941: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xjr4v
Jul 23 12:14:03.969: INFO: Scaling statefulset ss2 to 0
Jul 23 12:14:23.996: INFO: Waiting for statefulset status.replicas updated to 0
Jul 23 12:14:23.999: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:14:24.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-xjr4v" for this suite.
Jul 23 12:14:32.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:14:32.147: INFO: namespace: e2e-tests-statefulset-xjr4v, resource: bindings, ignored listing per whitelist
Jul 23 12:14:32.183: INFO: namespace e2e-tests-statefulset-xjr4v deletion completed in 8.13195325s

• [SLOW TEST:149.645 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
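The rolling update/rollback behaviour in the StatefulSet test above (image bumped from `nginx:1.14-alpine` to `1.15-alpine`, pods replaced in reverse ordinal order, each pod tracked against a controller revision, then rolled back) is driven by the `RollingUpdate` update strategy. A sketch of the relevant fragment of the StatefulSet under test; the label and service name are assumptions for illustration:

```yaml
# Sketch of the ss2 StatefulSet fragment relevant to the log above;
# selector labels and serviceName are illustrative assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test            # headless Service created in BeforeEach
  selector:
    matchLabels:
      app: ss2                 # assumed label
  updateStrategy:
    type: RollingUpdate        # replaces pods one at a time, highest ordinal first
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine  # updated to 1.15-alpine, then rolled back
```

The `ss2-6c5cd755cd` / `ss2-7c9b54fd4c` names in the log are the ControllerRevision hashes for the old and new templates; rollback is simply another template change back to the old revision.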
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:14:32.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 23 12:14:32.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-6pfzc'
Jul 23 12:14:32.385: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 23 12:14:32.385: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jul 23 12:14:34.420: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-zldrn]
Jul 23 12:14:34.420: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-zldrn" in namespace "e2e-tests-kubectl-6pfzc" to be "running and ready"
Jul 23 12:14:34.423: INFO: Pod "e2e-test-nginx-rc-zldrn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.715024ms
Jul 23 12:14:36.444: INFO: Pod "e2e-test-nginx-rc-zldrn": Phase="Running", Reason="", readiness=true. Elapsed: 2.023695529s
Jul 23 12:14:36.444: INFO: Pod "e2e-test-nginx-rc-zldrn" satisfied condition "running and ready"
Jul 23 12:14:36.444: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-zldrn]
Jul 23 12:14:36.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6pfzc'
Jul 23 12:14:36.560: INFO: stderr: ""
Jul 23 12:14:36.560: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jul 23 12:14:36.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6pfzc'
Jul 23 12:14:36.679: INFO: stderr: ""
Jul 23 12:14:36.679: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:14:36.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6pfzc" for this suite.
Jul 23 12:14:58.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:14:58.789: INFO: namespace: e2e-tests-kubectl-6pfzc, resource: bindings, ignored listing per whitelist
Jul 23 12:14:58.816: INFO: namespace e2e-tests-kubectl-6pfzc deletion completed in 22.132246842s

• [SLOW TEST:26.633 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
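As the stderr above notes, `kubectl run --generator=run/v1` is deprecated; it creates a ReplicationController roughly equivalent to the following manifest (a sketch; the `run: <name>` labeling is how that generator selects its pods):

```yaml
# Approximate RC produced by `kubectl run e2e-test-nginx-rc
# --image=docker.io/library/nginx:1.14-alpine --generator=run/v1`.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc     # generator labels pods with run=<name>
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

The empty stdout from `kubectl logs rc/e2e-test-nginx-rc` is expected here: nginx had not yet served any requests, so its access log was empty.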
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:14:58.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jul 23 12:14:58.941: INFO: Waiting up to 5m0s for pod "var-expansion-208f5d3f-ccde-11ea-92a5-0242ac11000b" in namespace "e2e-tests-var-expansion-cwmpb" to be "success or failure"
Jul 23 12:14:58.949: INFO: Pod "var-expansion-208f5d3f-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.208086ms
Jul 23 12:15:00.952: INFO: Pod "var-expansion-208f5d3f-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011535492s
Jul 23 12:15:02.956: INFO: Pod "var-expansion-208f5d3f-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015039377s
Jul 23 12:15:04.960: INFO: Pod "var-expansion-208f5d3f-ccde-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019380511s
STEP: Saw pod success
Jul 23 12:15:04.960: INFO: Pod "var-expansion-208f5d3f-ccde-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:15:04.964: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-208f5d3f-ccde-11ea-92a5-0242ac11000b container dapi-container: 
STEP: delete the pod
Jul 23 12:15:05.008: INFO: Waiting for pod var-expansion-208f5d3f-ccde-11ea-92a5-0242ac11000b to disappear
Jul 23 12:15:05.028: INFO: Pod var-expansion-208f5d3f-ccde-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:15:05.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-cwmpb" for this suite.
Jul 23 12:15:11.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:15:11.131: INFO: namespace: e2e-tests-var-expansion-cwmpb, resource: bindings, ignored listing per whitelist
Jul 23 12:15:11.151: INFO: namespace e2e-tests-var-expansion-cwmpb deletion completed in 6.119642289s

• [SLOW TEST:12.335 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
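The Variable Expansion test above exercises `$(VAR)` substitution in the pod spec: a later env var may reference earlier ones. In outline (a sketch; the variable names, values, and image are assumptions, not taken from the log):

```yaml
# Sketch of an env-composition pod: FOOBAR is composed from FOO and BAR
# via $(VAR) expansion; names and values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  containers:
  - name: dapi-container
    image: busybox               # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: $(FOO);;$(BAR)      # expanded by kubelet before the container starts
  restartPolicy: Never
```

Expansion only resolves variables defined earlier in the same `env` list; an unresolvable `$(X)` is passed through literally.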
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:15:11.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 23 12:15:19.315: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:19.339: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:21.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:21.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:23.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:23.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:25.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:25.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:27.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:27.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:29.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:29.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:31.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:31.343: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:33.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:33.343: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:35.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:35.505: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:37.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:37.342: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:39.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:39.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:41.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:41.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:43.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:43.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:45.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:45.343: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:47.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:47.343: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 23 12:15:49.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 23 12:15:49.343: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:15:49.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lpd9q" for this suite.
Jul 23 12:16:13.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:16:13.458: INFO: namespace: e2e-tests-container-lifecycle-hook-lpd9q, resource: bindings, ignored listing per whitelist
Jul 23 12:16:13.478: INFO: namespace e2e-tests-container-lifecycle-hook-lpd9q deletion completed in 24.130665685s

• [SLOW TEST:62.326 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:16:13.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 12:16:13.595: INFO: Creating ReplicaSet my-hostname-basic-4d108249-ccde-11ea-92a5-0242ac11000b
Jul 23 12:16:13.628: INFO: Pod name my-hostname-basic-4d108249-ccde-11ea-92a5-0242ac11000b: Found 0 pods out of 1
Jul 23 12:16:18.632: INFO: Pod name my-hostname-basic-4d108249-ccde-11ea-92a5-0242ac11000b: Found 1 pods out of 1
Jul 23 12:16:18.632: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-4d108249-ccde-11ea-92a5-0242ac11000b" is running
Jul 23 12:16:18.635: INFO: Pod "my-hostname-basic-4d108249-ccde-11ea-92a5-0242ac11000b-sc2hc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-23 12:16:13 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-23 12:16:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-23 12:16:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-23 12:16:13 +0000 UTC Reason: Message:}])
Jul 23 12:16:18.635: INFO: Trying to dial the pod
Jul 23 12:16:23.646: INFO: Controller my-hostname-basic-4d108249-ccde-11ea-92a5-0242ac11000b: Got expected result from replica 1 [my-hostname-basic-4d108249-ccde-11ea-92a5-0242ac11000b-sc2hc]: "my-hostname-basic-4d108249-ccde-11ea-92a5-0242ac11000b-sc2hc", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:16:23.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-xdh5f" for this suite.
Jul 23 12:16:29.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:16:29.782: INFO: namespace: e2e-tests-replicaset-xdh5f, resource: bindings, ignored listing per whitelist
Jul 23 12:16:29.805: INFO: namespace e2e-tests-replicaset-xdh5f deletion completed in 6.155676106s

• [SLOW TEST:16.327 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:16:29.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 23 12:16:34.437: INFO: Successfully updated pod "labelsupdate56c51a93-ccde-11ea-92a5-0242ac11000b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:16:38.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8cg9h" for this suite.
Jul 23 12:17:00.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:17:00.519: INFO: namespace: e2e-tests-projected-8cg9h, resource: bindings, ignored listing per whitelist
Jul 23 12:17:00.556: INFO: namespace e2e-tests-projected-8cg9h deletion completed in 22.088878623s

• [SLOW TEST:30.750 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:17:00.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 12:17:00.688: INFO: Waiting up to 5m0s for pod "downwardapi-volume-691fe0bd-ccde-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-vq9qs" to be "success or failure"
Jul 23 12:17:00.692: INFO: Pod "downwardapi-volume-691fe0bd-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.820239ms
Jul 23 12:17:02.757: INFO: Pod "downwardapi-volume-691fe0bd-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068933642s
Jul 23 12:17:04.767: INFO: Pod "downwardapi-volume-691fe0bd-ccde-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079146926s
STEP: Saw pod success
Jul 23 12:17:04.767: INFO: Pod "downwardapi-volume-691fe0bd-ccde-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:17:04.769: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-691fe0bd-ccde-11ea-92a5-0242ac11000b container client-container: 
STEP: delete the pod
Jul 23 12:17:04.805: INFO: Waiting for pod downwardapi-volume-691fe0bd-ccde-11ea-92a5-0242ac11000b to disappear
Jul 23 12:17:04.814: INFO: Pod downwardapi-volume-691fe0bd-ccde-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:17:04.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vq9qs" for this suite.
Jul 23 12:17:10.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:17:10.924: INFO: namespace: e2e-tests-downward-api-vq9qs, resource: bindings, ignored listing per whitelist
Jul 23 12:17:10.959: INFO: namespace e2e-tests-downward-api-vq9qs deletion completed in 6.137959111s

• [SLOW TEST:10.403 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:17:10.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-6f4cbf91-ccde-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 23 12:17:11.050: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f4e42f1-ccde-11ea-92a5-0242ac11000b" in namespace "e2e-tests-configmap-f6rfg" to be "success or failure"
Jul 23 12:17:11.054: INFO: Pod "pod-configmaps-6f4e42f1-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030213ms
Jul 23 12:17:13.194: INFO: Pod "pod-configmaps-6f4e42f1-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143712739s
Jul 23 12:17:15.198: INFO: Pod "pod-configmaps-6f4e42f1-ccde-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.147333656s
STEP: Saw pod success
Jul 23 12:17:15.198: INFO: Pod "pod-configmaps-6f4e42f1-ccde-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:17:15.200: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-6f4e42f1-ccde-11ea-92a5-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Jul 23 12:17:15.279: INFO: Waiting for pod pod-configmaps-6f4e42f1-ccde-11ea-92a5-0242ac11000b to disappear
Jul 23 12:17:15.294: INFO: Pod pod-configmaps-6f4e42f1-ccde-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:17:15.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-f6rfg" for this suite.
Jul 23 12:17:21.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:17:21.371: INFO: namespace: e2e-tests-configmap-f6rfg, resource: bindings, ignored listing per whitelist
Jul 23 12:17:21.377: INFO: namespace e2e-tests-configmap-f6rfg deletion completed in 6.07951427s

• [SLOW TEST:10.418 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:17:21.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 23 12:17:21.498: INFO: Waiting up to 5m0s for pod "pod-7588be95-ccde-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-ktrrx" to be "success or failure"
Jul 23 12:17:21.514: INFO: Pod "pod-7588be95-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.740529ms
Jul 23 12:17:23.572: INFO: Pod "pod-7588be95-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073788087s
Jul 23 12:17:25.576: INFO: Pod "pod-7588be95-ccde-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077991787s
STEP: Saw pod success
Jul 23 12:17:25.576: INFO: Pod "pod-7588be95-ccde-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:17:25.579: INFO: Trying to get logs from node hunter-worker2 pod pod-7588be95-ccde-11ea-92a5-0242ac11000b container test-container: 
STEP: delete the pod
Jul 23 12:17:25.705: INFO: Waiting for pod pod-7588be95-ccde-11ea-92a5-0242ac11000b to disappear
Jul 23 12:17:25.728: INFO: Pod pod-7588be95-ccde-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:17:25.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ktrrx" for this suite.
Jul 23 12:17:31.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:17:31.808: INFO: namespace: e2e-tests-emptydir-ktrrx, resource: bindings, ignored listing per whitelist
Jul 23 12:17:32.395: INFO: namespace e2e-tests-emptydir-ktrrx deletion completed in 6.663374754s

• [SLOW TEST:11.018 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:17:32.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jul 23 12:17:32.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul 23 12:17:32.697: INFO: stderr: ""
Jul 23 12:17:32.697: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:17:32.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ttqn7" for this suite.
Jul 23 12:17:38.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:17:38.818: INFO: namespace: e2e-tests-kubectl-ttqn7, resource: bindings, ignored listing per whitelist
Jul 23 12:17:38.831: INFO: namespace e2e-tests-kubectl-ttqn7 deletion completed in 6.130427638s

• [SLOW TEST:6.436 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:17:38.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-7ff23e0b-ccde-11ea-92a5-0242ac11000b
STEP: Creating secret with name s-test-opt-upd-7ff23e99-ccde-11ea-92a5-0242ac11000b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-7ff23e0b-ccde-11ea-92a5-0242ac11000b
STEP: Updating secret s-test-opt-upd-7ff23e99-ccde-11ea-92a5-0242ac11000b
STEP: Creating secret with name s-test-opt-create-7ff23ecf-ccde-11ea-92a5-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:19:03.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d4g8g" for this suite.
Jul 23 12:19:27.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:19:27.444: INFO: namespace: e2e-tests-projected-d4g8g, resource: bindings, ignored listing per whitelist
Jul 23 12:19:27.506: INFO: namespace e2e-tests-projected-d4g8g deletion completed in 24.093925467s

• [SLOW TEST:108.674 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:19:27.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 12:19:27.616: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/: 
alternatives.log
containers/

[log truncated: the remaining proxy requests returned the same directory listing; the end of this Proxy test and the header of the following [sig-api-machinery] Garbage collector test are missing]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul 23 12:19:41.562: INFO: 0 pods remaining
Jul 23 12:19:41.562: INFO: 0 pods has nil DeletionTimestamp
Jul 23 12:19:41.562: INFO: 
STEP: Gathering metrics
W0723 12:19:42.114679       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 23 12:19:42.114: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:19:42.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-tll5m" for this suite.
Jul 23 12:19:48.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:19:48.216: INFO: namespace: e2e-tests-gc-tll5m, resource: bindings, ignored listing per whitelist
Jul 23 12:19:48.234: INFO: namespace e2e-tests-gc-tll5m deletion completed in 6.115523689s

• [SLOW TEST:14.441 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:19:48.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-vkdw6/configmap-test-cd15ce8e-ccde-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 23 12:19:48.409: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd1834f1-ccde-11ea-92a5-0242ac11000b" in namespace "e2e-tests-configmap-vkdw6" to be "success or failure"
Jul 23 12:19:48.413: INFO: Pod "pod-configmaps-cd1834f1-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.467729ms
Jul 23 12:19:50.544: INFO: Pod "pod-configmaps-cd1834f1-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134795307s
Jul 23 12:19:52.548: INFO: Pod "pod-configmaps-cd1834f1-ccde-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138446571s
STEP: Saw pod success
Jul 23 12:19:52.548: INFO: Pod "pod-configmaps-cd1834f1-ccde-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:19:52.551: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-cd1834f1-ccde-11ea-92a5-0242ac11000b container env-test: 
STEP: delete the pod
Jul 23 12:19:52.576: INFO: Waiting for pod pod-configmaps-cd1834f1-ccde-11ea-92a5-0242ac11000b to disappear
Jul 23 12:19:52.586: INFO: Pod pod-configmaps-cd1834f1-ccde-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:19:52.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vkdw6" for this suite.
Jul 23 12:19:58.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:19:58.665: INFO: namespace: e2e-tests-configmap-vkdw6, resource: bindings, ignored listing per whitelist
Jul 23 12:19:58.701: INFO: namespace e2e-tests-configmap-vkdw6 deletion completed in 6.111847086s

• [SLOW TEST:10.467 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:19:58.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-d3483477-ccde-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 23 12:19:58.793: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d349daab-ccde-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-nw6hm" to be "success or failure"
Jul 23 12:19:58.797: INFO: Pod "pod-projected-secrets-d349daab-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.902905ms
Jul 23 12:20:00.801: INFO: Pod "pod-projected-secrets-d349daab-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007717202s
Jul 23 12:20:02.805: INFO: Pod "pod-projected-secrets-d349daab-ccde-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.011778353s
Jul 23 12:20:04.810: INFO: Pod "pod-projected-secrets-d349daab-ccde-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01619084s
STEP: Saw pod success
Jul 23 12:20:04.810: INFO: Pod "pod-projected-secrets-d349daab-ccde-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:20:04.813: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-d349daab-ccde-11ea-92a5-0242ac11000b container projected-secret-volume-test: 
STEP: delete the pod
Jul 23 12:20:04.847: INFO: Waiting for pod pod-projected-secrets-d349daab-ccde-11ea-92a5-0242ac11000b to disappear
Jul 23 12:20:04.877: INFO: Pod pod-projected-secrets-d349daab-ccde-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:20:04.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nw6hm" for this suite.
Jul 23 12:20:10.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:20:10.933: INFO: namespace: e2e-tests-projected-nw6hm, resource: bindings, ignored listing per whitelist
Jul 23 12:20:10.974: INFO: namespace e2e-tests-projected-nw6hm deletion completed in 6.093446302s

• [SLOW TEST:12.273 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:20:10.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-p8dj7
Jul 23 12:20:15.081: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-p8dj7
STEP: checking the pod's current state and verifying that restartCount is present
Jul 23 12:20:15.084: INFO: Initial restart count of pod liveness-http is 0
Jul 23 12:20:33.120: INFO: Restart count of pod e2e-tests-container-probe-p8dj7/liveness-http is now 1 (18.036457882s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:20:33.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-p8dj7" for this suite.
Jul 23 12:20:39.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:20:39.233: INFO: namespace: e2e-tests-container-probe-p8dj7, resource: bindings, ignored listing per whitelist
Jul 23 12:20:39.256: INFO: namespace e2e-tests-container-probe-p8dj7 deletion completed in 6.09041021s

• [SLOW TEST:28.282 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:20:39.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-eb74c75e-ccde-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 23 12:20:39.355: INFO: Waiting up to 5m0s for pod "pod-secrets-eb75879e-ccde-11ea-92a5-0242ac11000b" in namespace "e2e-tests-secrets-7sghj" to be "success or failure"
Jul 23 12:20:39.425: INFO: Pod "pod-secrets-eb75879e-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 70.462703ms
Jul 23 12:20:41.429: INFO: Pod "pod-secrets-eb75879e-ccde-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074544761s
Jul 23 12:20:43.433: INFO: Pod "pod-secrets-eb75879e-ccde-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078554622s
STEP: Saw pod success
Jul 23 12:20:43.433: INFO: Pod "pod-secrets-eb75879e-ccde-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:20:43.436: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-eb75879e-ccde-11ea-92a5-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 23 12:20:43.578: INFO: Waiting for pod pod-secrets-eb75879e-ccde-11ea-92a5-0242ac11000b to disappear
Jul 23 12:20:43.587: INFO: Pod pod-secrets-eb75879e-ccde-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:20:43.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7sghj" for this suite.
Jul 23 12:20:49.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:20:49.705: INFO: namespace: e2e-tests-secrets-7sghj, resource: bindings, ignored listing per whitelist
Jul 23 12:20:49.738: INFO: namespace e2e-tests-secrets-7sghj deletion completed in 6.147540482s

• [SLOW TEST:10.481 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:20:49.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:21:21.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-k4hgl" for this suite.
Jul 23 12:21:27.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:21:27.981: INFO: namespace: e2e-tests-container-runtime-k4hgl, resource: bindings, ignored listing per whitelist
Jul 23 12:21:28.017: INFO: namespace e2e-tests-container-runtime-k4hgl deletion completed in 6.094961882s

• [SLOW TEST:38.279 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:21:28.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-08881640-ccdf-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 23 12:21:28.131: INFO: Waiting up to 5m0s for pod "pod-configmaps-0889a5db-ccdf-11ea-92a5-0242ac11000b" in namespace "e2e-tests-configmap-4h6ws" to be "success or failure"
Jul 23 12:21:28.135: INFO: Pod "pod-configmaps-0889a5db-ccdf-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.45918ms
Jul 23 12:21:30.147: INFO: Pod "pod-configmaps-0889a5db-ccdf-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015585722s
Jul 23 12:21:32.151: INFO: Pod "pod-configmaps-0889a5db-ccdf-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.019772127s
Jul 23 12:21:34.155: INFO: Pod "pod-configmaps-0889a5db-ccdf-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023387164s
STEP: Saw pod success
Jul 23 12:21:34.155: INFO: Pod "pod-configmaps-0889a5db-ccdf-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:21:34.157: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-0889a5db-ccdf-11ea-92a5-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Jul 23 12:21:34.218: INFO: Waiting for pod pod-configmaps-0889a5db-ccdf-11ea-92a5-0242ac11000b to disappear
Jul 23 12:21:34.229: INFO: Pod pod-configmaps-0889a5db-ccdf-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:21:34.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4h6ws" for this suite.
Jul 23 12:21:40.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:21:40.273: INFO: namespace: e2e-tests-configmap-4h6ws, resource: bindings, ignored listing per whitelist
Jul 23 12:21:40.322: INFO: namespace e2e-tests-configmap-4h6ws deletion completed in 6.090092947s

• [SLOW TEST:12.305 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:21:40.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-j2c42
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-j2c42
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-j2c42
Jul 23 12:21:40.489: INFO: Found 0 stateful pods, waiting for 1
Jul 23 12:21:50.494: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jul 23 12:21:50.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 23 12:21:50.735: INFO: stderr: "I0723 12:21:50.629022    1404 log.go:172] (0xc00071e420) (0xc00073c640) Create stream\nI0723 12:21:50.629108    1404 log.go:172] (0xc00071e420) (0xc00073c640) Stream added, broadcasting: 1\nI0723 12:21:50.631833    1404 log.go:172] (0xc00071e420) Reply frame received for 1\nI0723 12:21:50.631898    1404 log.go:172] (0xc00071e420) (0xc0006b8c80) Create stream\nI0723 12:21:50.631912    1404 log.go:172] (0xc00071e420) (0xc0006b8c80) Stream added, broadcasting: 3\nI0723 12:21:50.633194    1404 log.go:172] (0xc00071e420) Reply frame received for 3\nI0723 12:21:50.633240    1404 log.go:172] (0xc00071e420) (0xc00073c6e0) Create stream\nI0723 12:21:50.633259    1404 log.go:172] (0xc00071e420) (0xc00073c6e0) Stream added, broadcasting: 5\nI0723 12:21:50.634268    1404 log.go:172] (0xc00071e420) Reply frame received for 5\nI0723 12:21:50.727006    1404 log.go:172] (0xc00071e420) Data frame received for 3\nI0723 12:21:50.727042    1404 log.go:172] (0xc0006b8c80) (3) Data frame handling\nI0723 12:21:50.727061    1404 log.go:172] (0xc0006b8c80) (3) Data frame sent\nI0723 12:21:50.727066    1404 log.go:172] (0xc00071e420) Data frame received for 3\nI0723 12:21:50.727071    1404 log.go:172] (0xc0006b8c80) (3) Data frame handling\nI0723 12:21:50.727320    1404 log.go:172] (0xc00071e420) Data frame received for 5\nI0723 12:21:50.727369    1404 log.go:172] (0xc00073c6e0) (5) Data frame handling\nI0723 12:21:50.729745    1404 log.go:172] (0xc00071e420) Data frame received for 1\nI0723 12:21:50.729785    1404 log.go:172] (0xc00073c640) (1) Data frame handling\nI0723 12:21:50.729833    1404 log.go:172] (0xc00073c640) (1) Data frame sent\nI0723 12:21:50.729865    1404 log.go:172] (0xc00071e420) (0xc00073c640) Stream removed, broadcasting: 1\nI0723 12:21:50.729902    1404 log.go:172] (0xc00071e420) Go away received\nI0723 12:21:50.730122    1404 log.go:172] (0xc00071e420) (0xc00073c640) Stream removed, broadcasting: 1\nI0723 12:21:50.730146    1404 log.go:172] (0xc00071e420) (0xc0006b8c80) Stream removed, broadcasting: 3\nI0723 12:21:50.730159    1404 log.go:172] (0xc00071e420) (0xc00073c6e0) Stream removed, broadcasting: 5\n"
Jul 23 12:21:50.735: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 23 12:21:50.735: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 23 12:21:50.761: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 23 12:22:00.766: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 23 12:22:00.766: INFO: Waiting for statefulset status.replicas updated to 0
Jul 23 12:22:00.783: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul 23 12:22:00.783: INFO: ss-0  hunter-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  }]
Jul 23 12:22:00.783: INFO: 
Jul 23 12:22:00.783: INFO: StatefulSet ss has not reached scale 3, at 1
Jul 23 12:22:01.788: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994391985s
Jul 23 12:22:02.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989550476s
Jul 23 12:22:03.845: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.936364804s
Jul 23 12:22:04.850: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.932124813s
Jul 23 12:22:05.854: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.927243901s
Jul 23 12:22:06.860: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.922627226s
Jul 23 12:22:07.864: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.917337392s
Jul 23 12:22:08.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.912940767s
Jul 23 12:22:09.874: INFO: Verifying statefulset ss doesn't scale past 3 for another 907.742921ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-j2c42
Jul 23 12:22:10.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:22:11.091: INFO: stderr: "I0723 12:22:11.010589    1426 log.go:172] (0xc0008542c0) (0xc0006f4640) Create stream\nI0723 12:22:11.010657    1426 log.go:172] (0xc0008542c0) (0xc0006f4640) Stream added, broadcasting: 1\nI0723 12:22:11.013495    1426 log.go:172] (0xc0008542c0) Reply frame received for 1\nI0723 12:22:11.013536    1426 log.go:172] (0xc0008542c0) (0xc0005fcbe0) Create stream\nI0723 12:22:11.013545    1426 log.go:172] (0xc0008542c0) (0xc0005fcbe0) Stream added, broadcasting: 3\nI0723 12:22:11.014828    1426 log.go:172] (0xc0008542c0) Reply frame received for 3\nI0723 12:22:11.014896    1426 log.go:172] (0xc0008542c0) (0xc000220000) Create stream\nI0723 12:22:11.014925    1426 log.go:172] (0xc0008542c0) (0xc000220000) Stream added, broadcasting: 5\nI0723 12:22:11.015960    1426 log.go:172] (0xc0008542c0) Reply frame received for 5\nI0723 12:22:11.084252    1426 log.go:172] (0xc0008542c0) Data frame received for 5\nI0723 12:22:11.084288    1426 log.go:172] (0xc000220000) (5) Data frame handling\nI0723 12:22:11.084314    1426 log.go:172] (0xc0008542c0) Data frame received for 3\nI0723 12:22:11.084348    1426 log.go:172] (0xc0005fcbe0) (3) Data frame handling\nI0723 12:22:11.084390    1426 log.go:172] (0xc0005fcbe0) (3) Data frame sent\nI0723 12:22:11.084400    1426 log.go:172] (0xc0008542c0) Data frame received for 3\nI0723 12:22:11.084411    1426 log.go:172] (0xc0005fcbe0) (3) Data frame handling\nI0723 12:22:11.086394    1426 log.go:172] (0xc0008542c0) Data frame received for 1\nI0723 12:22:11.086424    1426 log.go:172] (0xc0006f4640) (1) Data frame handling\nI0723 12:22:11.086443    1426 log.go:172] (0xc0006f4640) (1) Data frame sent\nI0723 12:22:11.086463    1426 log.go:172] (0xc0008542c0) (0xc0006f4640) Stream removed, broadcasting: 1\nI0723 12:22:11.086495    1426 log.go:172] (0xc0008542c0) Go away received\nI0723 12:22:11.086782    1426 log.go:172] (0xc0008542c0) (0xc0006f4640) Stream removed, broadcasting: 1\nI0723 12:22:11.086821    1426 log.go:172] (0xc0008542c0) (0xc0005fcbe0) Stream removed, broadcasting: 3\nI0723 12:22:11.086835    1426 log.go:172] (0xc0008542c0) (0xc000220000) Stream removed, broadcasting: 5\n"
Jul 23 12:22:11.091: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 23 12:22:11.091: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 23 12:22:11.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:22:11.321: INFO: stderr: "I0723 12:22:11.255565    1448 log.go:172] (0xc000138840) (0xc000720640) Create stream\nI0723 12:22:11.255628    1448 log.go:172] (0xc000138840) (0xc000720640) Stream added, broadcasting: 1\nI0723 12:22:11.258084    1448 log.go:172] (0xc000138840) Reply frame received for 1\nI0723 12:22:11.258123    1448 log.go:172] (0xc000138840) (0xc0005f4c80) Create stream\nI0723 12:22:11.258136    1448 log.go:172] (0xc000138840) (0xc0005f4c80) Stream added, broadcasting: 3\nI0723 12:22:11.258903    1448 log.go:172] (0xc000138840) Reply frame received for 3\nI0723 12:22:11.258940    1448 log.go:172] (0xc000138840) (0xc0007de000) Create stream\nI0723 12:22:11.258952    1448 log.go:172] (0xc000138840) (0xc0007de000) Stream added, broadcasting: 5\nI0723 12:22:11.259772    1448 log.go:172] (0xc000138840) Reply frame received for 5\nI0723 12:22:11.313754    1448 log.go:172] (0xc000138840) Data frame received for 3\nI0723 12:22:11.313801    1448 log.go:172] (0xc0005f4c80) (3) Data frame handling\nI0723 12:22:11.313824    1448 log.go:172] (0xc0005f4c80) (3) Data frame sent\nI0723 12:22:11.313838    1448 log.go:172] (0xc000138840) Data frame received for 3\nI0723 12:22:11.313849    1448 log.go:172] (0xc0005f4c80) (3) Data frame handling\nI0723 12:22:11.313880    1448 log.go:172] (0xc000138840) Data frame received for 5\nI0723 12:22:11.313903    1448 log.go:172] (0xc0007de000) (5) Data frame handling\nI0723 12:22:11.313919    1448 log.go:172] (0xc0007de000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0723 12:22:11.314021    1448 log.go:172] (0xc000138840) Data frame received for 5\nI0723 12:22:11.314043    1448 log.go:172] (0xc0007de000) (5) Data frame handling\nI0723 12:22:11.315668    1448 log.go:172] (0xc000138840) Data frame received for 1\nI0723 12:22:11.315707    1448 log.go:172] (0xc000720640) (1) Data frame handling\nI0723 12:22:11.315734    1448 log.go:172] (0xc000720640) (1) Data frame sent\nI0723 12:22:11.315759    1448 log.go:172] (0xc000138840) (0xc000720640) Stream removed, broadcasting: 1\nI0723 12:22:11.315801    1448 log.go:172] (0xc000138840) Go away received\nI0723 12:22:11.316060    1448 log.go:172] (0xc000138840) (0xc000720640) Stream removed, broadcasting: 1\nI0723 12:22:11.316087    1448 log.go:172] (0xc000138840) (0xc0005f4c80) Stream removed, broadcasting: 3\nI0723 12:22:11.316101    1448 log.go:172] (0xc000138840) (0xc0007de000) Stream removed, broadcasting: 5\n"
Jul 23 12:22:11.321: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 23 12:22:11.321: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 23 12:22:11.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:22:11.519: INFO: stderr: "I0723 12:22:11.439413    1470 log.go:172] (0xc000162840) (0xc0007a0640) Create stream\nI0723 12:22:11.439463    1470 log.go:172] (0xc000162840) (0xc0007a0640) Stream added, broadcasting: 1\nI0723 12:22:11.441624    1470 log.go:172] (0xc000162840) Reply frame received for 1\nI0723 12:22:11.441667    1470 log.go:172] (0xc000162840) (0xc0005d0d20) Create stream\nI0723 12:22:11.441683    1470 log.go:172] (0xc000162840) (0xc0005d0d20) Stream added, broadcasting: 3\nI0723 12:22:11.442269    1470 log.go:172] (0xc000162840) Reply frame received for 3\nI0723 12:22:11.442291    1470 log.go:172] (0xc000162840) (0xc0005d0e60) Create stream\nI0723 12:22:11.442296    1470 log.go:172] (0xc000162840) (0xc0005d0e60) Stream added, broadcasting: 5\nI0723 12:22:11.442942    1470 log.go:172] (0xc000162840) Reply frame received for 5\nI0723 12:22:11.513407    1470 log.go:172] (0xc000162840) Data frame received for 5\nI0723 12:22:11.513437    1470 log.go:172] (0xc0005d0e60) (5) Data frame handling\nI0723 12:22:11.513445    1470 log.go:172] (0xc0005d0e60) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0723 12:22:11.513473    1470 log.go:172] (0xc000162840) Data frame received for 3\nI0723 12:22:11.513500    1470 log.go:172] (0xc0005d0d20) (3) Data frame handling\nI0723 12:22:11.513520    1470 log.go:172] (0xc0005d0d20) (3) Data frame sent\nI0723 12:22:11.513555    1470 log.go:172] (0xc000162840) Data frame received for 5\nI0723 12:22:11.513589    1470 log.go:172] (0xc0005d0e60) (5) Data frame handling\nI0723 12:22:11.513613    1470 log.go:172] (0xc000162840) Data frame received for 3\nI0723 12:22:11.513627    1470 log.go:172] (0xc0005d0d20) (3) Data frame handling\nI0723 12:22:11.515585    1470 log.go:172] (0xc000162840) Data frame received for 1\nI0723 12:22:11.515607    1470 log.go:172] (0xc0007a0640) (1) Data frame handling\nI0723 12:22:11.515630    1470 log.go:172] (0xc0007a0640) (1) Data frame sent\nI0723 12:22:11.515652    1470 log.go:172] (0xc000162840) (0xc0007a0640) Stream removed, broadcasting: 1\nI0723 12:22:11.515690    1470 log.go:172] (0xc000162840) Go away received\nI0723 12:22:11.515839    1470 log.go:172] (0xc000162840) (0xc0007a0640) Stream removed, broadcasting: 1\nI0723 12:22:11.515853    1470 log.go:172] (0xc000162840) (0xc0005d0d20) Stream removed, broadcasting: 3\nI0723 12:22:11.515862    1470 log.go:172] (0xc000162840) (0xc0005d0e60) Stream removed, broadcasting: 5\n"
Jul 23 12:22:11.519: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 23 12:22:11.519: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 23 12:22:11.524: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jul 23 12:22:21.528: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 12:22:21.528: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 12:22:21.528: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jul 23 12:22:21.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 23 12:22:21.761: INFO: stderr: "I0723 12:22:21.667171    1493 log.go:172] (0xc00015c790) (0xc00056b4a0) Create stream\nI0723 12:22:21.667236    1493 log.go:172] (0xc00015c790) (0xc00056b4a0) Stream added, broadcasting: 1\nI0723 12:22:21.670416    1493 log.go:172] (0xc00015c790) Reply frame received for 1\nI0723 12:22:21.670458    1493 log.go:172] (0xc00015c790) (0xc0003c0000) Create stream\nI0723 12:22:21.670471    1493 log.go:172] (0xc00015c790) (0xc0003c0000) Stream added, broadcasting: 3\nI0723 12:22:21.671474    1493 log.go:172] (0xc00015c790) Reply frame received for 3\nI0723 12:22:21.671516    1493 log.go:172] (0xc00015c790) (0xc0003c00a0) Create stream\nI0723 12:22:21.671533    1493 log.go:172] (0xc00015c790) (0xc0003c00a0) Stream added, broadcasting: 5\nI0723 12:22:21.672820    1493 log.go:172] (0xc00015c790) Reply frame received for 5\nI0723 12:22:21.755302    1493 log.go:172] (0xc00015c790) Data frame received for 3\nI0723 12:22:21.755349    1493 log.go:172] (0xc0003c0000) (3) Data frame handling\nI0723 12:22:21.755369    1493 log.go:172] (0xc0003c0000) (3) Data frame sent\nI0723 12:22:21.755384    1493 log.go:172] (0xc00015c790) Data frame received for 3\nI0723 12:22:21.755397    1493 log.go:172] (0xc0003c0000) (3) Data frame handling\nI0723 12:22:21.755447    1493 log.go:172] (0xc00015c790) Data frame received for 5\nI0723 12:22:21.755485    1493 log.go:172] (0xc0003c00a0) (5) Data frame handling\nI0723 12:22:21.757440    1493 log.go:172] (0xc00015c790) Data frame received for 1\nI0723 12:22:21.757460    1493 log.go:172] (0xc00056b4a0) (1) Data frame handling\nI0723 12:22:21.757479    1493 log.go:172] (0xc00056b4a0) (1) Data frame sent\nI0723 12:22:21.757497    1493 log.go:172] (0xc00015c790) (0xc00056b4a0) Stream removed, broadcasting: 1\nI0723 12:22:21.757642    1493 log.go:172] (0xc00015c790) (0xc00056b4a0) Stream removed, broadcasting: 1\nI0723 12:22:21.757662    1493 log.go:172] (0xc00015c790) (0xc0003c0000) Stream removed, broadcasting: 3\nI0723 12:22:21.757693    1493 log.go:172] (0xc00015c790) Go away received\nI0723 12:22:21.757746    1493 log.go:172] (0xc00015c790) (0xc0003c00a0) Stream removed, broadcasting: 5\n"
Jul 23 12:22:21.761: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 23 12:22:21.761: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 23 12:22:21.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 23 12:22:21.998: INFO: stderr: "I0723 12:22:21.891853    1517 log.go:172] (0xc000764160) (0xc0006a6780) Create stream\nI0723 12:22:21.891914    1517 log.go:172] (0xc000764160) (0xc0006a6780) Stream added, broadcasting: 1\nI0723 12:22:21.894843    1517 log.go:172] (0xc000764160) Reply frame received for 1\nI0723 12:22:21.894904    1517 log.go:172] (0xc000764160) (0xc000592d20) Create stream\nI0723 12:22:21.894931    1517 log.go:172] (0xc000764160) (0xc000592d20) Stream added, broadcasting: 3\nI0723 12:22:21.895828    1517 log.go:172] (0xc000764160) Reply frame received for 3\nI0723 12:22:21.895874    1517 log.go:172] (0xc000764160) (0xc00081c000) Create stream\nI0723 12:22:21.895889    1517 log.go:172] (0xc000764160) (0xc00081c000) Stream added, broadcasting: 5\nI0723 12:22:21.896602    1517 log.go:172] (0xc000764160) Reply frame received for 5\nI0723 12:22:21.992044    1517 log.go:172] (0xc000764160) Data frame received for 5\nI0723 12:22:21.992107    1517 log.go:172] (0xc00081c000) (5) Data frame handling\nI0723 12:22:21.992148    1517 log.go:172] (0xc000764160) Data frame received for 3\nI0723 12:22:21.992187    1517 log.go:172] (0xc000592d20) (3) Data frame handling\nI0723 12:22:21.992229    1517 log.go:172] (0xc000592d20) (3) Data frame sent\nI0723 12:22:21.992536    1517 log.go:172] (0xc000764160) Data frame received for 3\nI0723 12:22:21.992568    1517 log.go:172] (0xc000592d20) (3) Data frame handling\nI0723 12:22:21.993998    1517 log.go:172] (0xc000764160) Data frame received for 1\nI0723 12:22:21.994016    1517 log.go:172] (0xc0006a6780) (1) Data frame handling\nI0723 12:22:21.994026    1517 log.go:172] (0xc0006a6780) (1) Data frame sent\nI0723 12:22:21.994037    1517 log.go:172] (0xc000764160) (0xc0006a6780) Stream removed, broadcasting: 1\nI0723 12:22:21.994053    1517 log.go:172] (0xc000764160) Go away received\nI0723 12:22:21.994372    1517 log.go:172] (0xc000764160) (0xc0006a6780) Stream removed, broadcasting: 1\nI0723 12:22:21.994394    1517 
log.go:172] (0xc000764160) (0xc000592d20) Stream removed, broadcasting: 3\nI0723 12:22:21.994408    1517 log.go:172] (0xc000764160) (0xc00081c000) Stream removed, broadcasting: 5\n"
Jul 23 12:22:21.998: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 23 12:22:21.998: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 23 12:22:21.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 23 12:22:22.257: INFO: stderr: "I0723 12:22:22.150749    1540 log.go:172] (0xc00083a2c0) (0xc0006fe640) Create stream\nI0723 12:22:22.150828    1540 log.go:172] (0xc00083a2c0) (0xc0006fe640) Stream added, broadcasting: 1\nI0723 12:22:22.153406    1540 log.go:172] (0xc00083a2c0) Reply frame received for 1\nI0723 12:22:22.153448    1540 log.go:172] (0xc00083a2c0) (0xc0006fe6e0) Create stream\nI0723 12:22:22.153458    1540 log.go:172] (0xc00083a2c0) (0xc0006fe6e0) Stream added, broadcasting: 3\nI0723 12:22:22.154623    1540 log.go:172] (0xc00083a2c0) Reply frame received for 3\nI0723 12:22:22.154666    1540 log.go:172] (0xc00083a2c0) (0xc0005dadc0) Create stream\nI0723 12:22:22.154680    1540 log.go:172] (0xc00083a2c0) (0xc0005dadc0) Stream added, broadcasting: 5\nI0723 12:22:22.155797    1540 log.go:172] (0xc00083a2c0) Reply frame received for 5\nI0723 12:22:22.250315    1540 log.go:172] (0xc00083a2c0) Data frame received for 3\nI0723 12:22:22.250343    1540 log.go:172] (0xc0006fe6e0) (3) Data frame handling\nI0723 12:22:22.250359    1540 log.go:172] (0xc0006fe6e0) (3) Data frame sent\nI0723 12:22:22.250484    1540 log.go:172] (0xc00083a2c0) Data frame received for 3\nI0723 12:22:22.250498    1540 log.go:172] (0xc0006fe6e0) (3) Data frame handling\nI0723 12:22:22.250622    1540 log.go:172] (0xc00083a2c0) Data frame received for 5\nI0723 12:22:22.250644    1540 log.go:172] (0xc0005dadc0) (5) Data frame handling\nI0723 12:22:22.252060    1540 log.go:172] (0xc00083a2c0) Data frame received for 1\nI0723 12:22:22.252078    1540 log.go:172] (0xc0006fe640) (1) Data frame handling\nI0723 12:22:22.252089    1540 log.go:172] (0xc0006fe640) (1) Data frame sent\nI0723 12:22:22.252105    1540 log.go:172] (0xc00083a2c0) (0xc0006fe640) Stream removed, broadcasting: 1\nI0723 12:22:22.252136    1540 log.go:172] (0xc00083a2c0) Go away received\nI0723 12:22:22.252271    1540 log.go:172] (0xc00083a2c0) (0xc0006fe640) Stream removed, broadcasting: 1\nI0723 12:22:22.252282    1540 
log.go:172] (0xc00083a2c0) (0xc0006fe6e0) Stream removed, broadcasting: 3\nI0723 12:22:22.252288    1540 log.go:172] (0xc00083a2c0) (0xc0005dadc0) Stream removed, broadcasting: 5\n"
Jul 23 12:22:22.257: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 23 12:22:22.257: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 23 12:22:22.257: INFO: Waiting for statefulset status.replicas updated to 0
Jul 23 12:22:22.260: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul 23 12:22:32.268: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 23 12:22:32.268: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 23 12:22:32.268: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 23 12:22:32.297: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 23 12:22:32.297: INFO: ss-0  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  }]
Jul 23 12:22:32.297: INFO: ss-1  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:32.297: INFO: ss-2  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:32.297: INFO: 
Jul 23 12:22:32.297: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 23 12:22:33.458: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 23 12:22:33.458: INFO: ss-0  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  }]
Jul 23 12:22:33.458: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:33.458: INFO: ss-2  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:33.458: INFO: 
Jul 23 12:22:33.458: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 23 12:22:34.462: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 23 12:22:34.462: INFO: ss-0  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  }]
Jul 23 12:22:34.462: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:34.462: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:34.462: INFO: 
Jul 23 12:22:34.462: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 23 12:22:35.480: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 23 12:22:35.480: INFO: ss-0  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  }]
Jul 23 12:22:35.480: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:35.481: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:35.481: INFO: 
Jul 23 12:22:35.481: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 23 12:22:36.487: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 23 12:22:36.487: INFO: ss-0  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  }]
Jul 23 12:22:36.487: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:36.487: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:36.487: INFO: 
Jul 23 12:22:36.487: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 23 12:22:37.501: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 23 12:22:37.501: INFO: ss-0  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:21:40 +0000 UTC  }]
Jul 23 12:22:37.501: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:37.501: INFO: 
Jul 23 12:22:37.501: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 23 12:22:38.506: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 23 12:22:38.506: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:38.507: INFO: 
Jul 23 12:22:38.507: INFO: StatefulSet ss has not reached scale 0, at 1
Jul 23 12:22:39.546: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 23 12:22:39.546: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:39.546: INFO: 
Jul 23 12:22:39.546: INFO: StatefulSet ss has not reached scale 0, at 1
Jul 23 12:22:40.551: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 23 12:22:40.551: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:40.551: INFO: 
Jul 23 12:22:40.551: INFO: StatefulSet ss has not reached scale 0, at 1
Jul 23 12:22:41.564: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 23 12:22:41.564: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:22:00 +0000 UTC  }]
Jul 23 12:22:41.564: INFO: 
Jul 23 12:22:41.564: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-j2c42
Jul 23 12:22:42.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:22:42.692: INFO: rc: 1
Jul 23 12:22:42.692: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0015d0900 exit status 1   true [0xc0004b2e08 0xc0004b2e78 0xc0004b2eb0] [0xc0004b2e08 0xc0004b2e78 0xc0004b2eb0] [0xc0004b2e70 0xc0004b2e90] [0x935700 0x935700] 0xc0023c8600 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jul 23 12:22:52.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:22:52.788: INFO: rc: 1
Jul 23 12:22:52.788: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002574480 exit status 1   true [0xc0005806a0 0xc0005808b0 0xc000580a50] [0xc0005806a0 0xc0005808b0 0xc000580a50] [0xc000580890 0xc0005809a8] [0x935700 0x935700] 0xc0022fe300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:23:02.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:23:02.889: INFO: rc: 1
Jul 23 12:23:02.889: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0015d0ab0 exit status 1   true [0xc0004b2ee8 0xc0004b2f38 0xc0004b2f88] [0xc0004b2ee8 0xc0004b2f38 0xc0004b2f88] [0xc0004b2f10 0xc0004b2f70] [0x935700 0x935700] 0xc0023c89c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:23:12.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:23:12.978: INFO: rc: 1
Jul 23 12:23:12.978: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001ccc120 exit status 1   true [0xc000cf4000 0xc000cf4018 0xc000cf4030] [0xc000cf4000 0xc000cf4018 0xc000cf4030] [0xc000cf4010 0xc000cf4028] [0x935700 0x935700] 0xc0022ee240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:23:22.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:23:23.065: INFO: rc: 1
Jul 23 12:23:23.065: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0015d0cc0 exit status 1   true [0xc0004b2fb0 0xc0004b3010 0xc0004b3060] [0xc0004b2fb0 0xc0004b3010 0xc0004b3060] [0xc0004b3008 0xc0004b3050] [0x935700 0x935700] 0xc0023c8e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:23:33.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:23:33.161: INFO: rc: 1
Jul 23 12:23:33.161: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0015d0de0 exit status 1   true [0xc0004b3068 0xc0004b30e8 0xc0004b3148] [0xc0004b3068 0xc0004b30e8 0xc0004b3148] [0xc0004b30d8 0xc0004b3128] [0x935700 0x935700] 0xc0023c9320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:23:43.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:23:43.259: INFO: rc: 1
Jul 23 12:23:43.259: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001ccc270 exit status 1   true [0xc000cf4038 0xc000cf4050 0xc000cf4068] [0xc000cf4038 0xc000cf4050 0xc000cf4068] [0xc000cf4048 0xc000cf4060] [0x935700 0x935700] 0xc0022ee840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:23:53.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:23:53.343: INFO: rc: 1
Jul 23 12:23:53.343: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0015d1110 exit status 1   true [0xc0004b3160 0xc0004b3180 0xc0004b31b8] [0xc0004b3160 0xc0004b3180 0xc0004b31b8] [0xc0004b3178 0xc0004b31a0] [0x935700 0x935700] 0xc0023c95c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:24:03.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:24:03.443: INFO: rc: 1
Jul 23 12:24:03.443: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0015d1230 exit status 1   true [0xc0004b31c0 0xc0004b3278 0xc0004b32d0] [0xc0004b31c0 0xc0004b3278 0xc0004b32d0] [0xc0004b3238 0xc0004b3298] [0x935700 0x935700] 0xc0023c98c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:24:13.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:24:13.528: INFO: rc: 1
Jul 23 12:24:13.528: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001ccc390 exit status 1   true [0xc000cf4070 0xc000cf4088 0xc000cf40a0] [0xc000cf4070 0xc000cf4088 0xc000cf40a0] [0xc000cf4080 0xc000cf4098] [0x935700 0x935700] 0xc0022eeae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:24:23.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:24:23.615: INFO: rc: 1
Jul 23 12:24:23.615: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002574630 exit status 1   true [0xc000580be8 0xc000580e70 0xc000580f78] [0xc000580be8 0xc000580e70 0xc000580f78] [0xc000580c98 0xc000580f38] [0x935700 0x935700] 0xc0022fe5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:24:33.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:24:33.719: INFO: rc: 1
Jul 23 12:24:33.719: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00230e420 exit status 1   true [0xc001a80000 0xc001a80020 0xc001a80038] [0xc001a80000 0xc001a80020 0xc001a80038] [0xc001a80018 0xc001a80030] [0x935700 0x935700] 0xc001da87e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:24:43.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:24:43.814: INFO: rc: 1
Jul 23 12:24:43.814: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001ccc4b0 exit status 1   true [0xc000cf40b0 0xc000cf40c8 0xc000cf40e0] [0xc000cf40b0 0xc000cf40c8 0xc000cf40e0] [0xc000cf40c0 0xc000cf40d8] [0x935700 0x935700] 0xc0022eed80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:24:53.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:24:53.912: INFO: rc: 1
Jul 23 12:24:53.912: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001ccc0f0 exit status 1   true [0xc00000e1f8 0xc001a80008 0xc001a80028] [0xc00000e1f8 0xc001a80008 0xc001a80028] [0xc001a80000 0xc001a80020] [0x935700 0x935700] 0xc001da83c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:25:03.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:25:04.004: INFO: rc: 1
Jul 23 12:25:04.004: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001ccc240 exit status 1   true [0xc001a80030 0xc001a80048 0xc001a80060] [0xc001a80030 0xc001a80048 0xc001a80060] [0xc001a80040 0xc001a80058] [0x935700 0x935700] 0xc001da8a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:27:35.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:27:35.452: INFO: rc: 1
Jul 23 12:27:35.452: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001ccc5a0 exit status 1   true [0xc001a800b0 0xc001a800c8 0xc001a800e0] [0xc001a800b0 0xc001a800c8 0xc001a800e0] [0xc001a800c0 0xc001a800d8] [0x935700 0x935700] 0xc0023c8f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul 23 12:27:45.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-j2c42 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:27:45.539: INFO: rc: 1
Jul 23 12:27:45.539: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Jul 23 12:27:45.539: INFO: Scaling statefulset ss to 0
Jul 23 12:27:45.546: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 23 12:27:45.548: INFO: Deleting all statefulset in ns e2e-tests-statefulset-j2c42
Jul 23 12:27:45.550: INFO: Scaling statefulset ss to 0
Jul 23 12:27:45.556: INFO: Waiting for statefulset status.replicas updated to 0
Jul 23 12:27:45.558: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:27:45.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-j2c42" for this suite.
Jul 23 12:27:51.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:27:51.675: INFO: namespace: e2e-tests-statefulset-j2c42, resource: bindings, ignored listing per whitelist
Jul 23 12:27:51.683: INFO: namespace e2e-tests-statefulset-j2c42 deletion completed in 6.110217644s

• [SLOW TEST:371.361 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
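The long retry sequence above is the framework's RunHostCmd helper re-running the same `kubectl exec` every 10 seconds until the command succeeds or the suite's timeout fires; here it kept failing with `pods "ss-1" not found` while the StatefulSet pod was being recreated. A minimal sketch of that retry loop, assuming illustrative names (`retry_host_cmd` is not the framework's function, and the sleep is zeroed so the sketch runs instantly where the real helper waits 10s):

```shell
#!/bin/sh
# Sketch of the RunHostCmd retry pattern seen in the log above.
# Assumptions: retry_host_cmd is an illustrative name; the real e2e helper
# shells out to `kubectl exec ... -- /bin/sh -c '<cmd> || true'` and sleeps
# 10 seconds between attempts.
retry_host_cmd() {
  max_attempts=$1; shift
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 0   # the e2e framework waits 10s here
  done
  return 0
}

# Usage: a command that succeeds immediately returns 0 on the first attempt;
# one that never succeeds exhausts the attempt budget and returns 1.
retry_host_cmd 3 true
```

Note that the test wraps the inner command in `|| true` so the exec itself reports success once the pod exists; the rc=1 lines above therefore mean the exec could not reach the pod at all, not that the `mv` failed.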
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:27:51.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0723 12:28:22.330497       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 23 12:28:22.330: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:28:22.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ztspl" for this suite.
Jul 23 12:28:28.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:28:28.405: INFO: namespace: e2e-tests-gc-ztspl, resource: bindings, ignored listing per whitelist
Jul 23 12:28:28.410: INFO: namespace e2e-tests-gc-ztspl deletion completed in 6.076366555s

• [SLOW TEST:36.726 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
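The garbage collector test above deletes the Deployment with `deleteOptions.propagationPolicy: Orphan`, then waits 30 seconds to confirm the ReplicaSet is left behind. The DeleteOptions body it sends to the API server has this shape (standard Kubernetes API; the target resource path is whatever Deployment the test created):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With this policy the garbage collector removes the owner reference from the dependents instead of deleting them, so only the Deployment itself goes away and the ReplicaSet (and its Pods) survive.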
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:28:28.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-03260c7c-cce0-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 23 12:28:28.601: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-03282e7e-cce0-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-vxmj9" to be "success or failure"
Jul 23 12:28:28.689: INFO: Pod "pod-projected-configmaps-03282e7e-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 88.679062ms
Jul 23 12:28:30.693: INFO: Pod "pod-projected-configmaps-03282e7e-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092333833s
Jul 23 12:28:32.696: INFO: Pod "pod-projected-configmaps-03282e7e-cce0-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095380602s
STEP: Saw pod success
Jul 23 12:28:32.696: INFO: Pod "pod-projected-configmaps-03282e7e-cce0-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:28:32.699: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-03282e7e-cce0-11ea-92a5-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 23 12:28:32.736: INFO: Waiting for pod pod-projected-configmaps-03282e7e-cce0-11ea-92a5-0242ac11000b to disappear
Jul 23 12:28:32.742: INFO: Pod pod-projected-configmaps-03282e7e-cce0-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:28:32.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vxmj9" for this suite.
Jul 23 12:28:38.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:28:38.841: INFO: namespace: e2e-tests-projected-vxmj9, resource: bindings, ignored listing per whitelist
Jul 23 12:28:38.871: INFO: namespace e2e-tests-projected-vxmj9 deletion completed in 6.122294299s

• [SLOW TEST:10.461 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
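The projected-configMap test above creates a pod whose projected volume maps a configMap key to a custom path and whose container runs as a non-root UID. A sketch of that pod shape, with illustrative names, image, and paths (the real test generates its own):

```yaml
# Illustrative sketch of the test pod: projected configMap volume with a
# key-to-path mapping, consumed by a non-root container.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  securityContext:
    runAsUser: 1000          # non-root, as the test name requires
  containers:
  - name: projected-configmap-volume-test
    image: busybox           # illustrative; the real test uses its own image
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-2
            path: path/to/data-2
```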
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:28:38.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 12:29:01.020: INFO: Container started at 2020-07-23 12:28:41 +0000 UTC, pod became ready at 2020-07-23 12:29:00 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:29:01.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lj56h" for this suite.
Jul 23 12:29:23.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:29:23.105: INFO: namespace: e2e-tests-container-probe-lj56h, resource: bindings, ignored listing per whitelist
Jul 23 12:29:23.120: INFO: namespace e2e-tests-container-probe-lj56h deletion completed in 22.092847352s

• [SLOW TEST:44.249 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
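The probe test above relies on the gap between container start (12:28:41) and readiness (12:29:00): the pod must not report Ready before the probe's initial delay elapses, and the container must never restart, since readiness probes gate Service endpoints but cannot restart a container (only liveness probes do). A sketch of such a probe, with illustrative timings and check:

```yaml
# Illustrative readiness probe with a deliberate initial delay; the exact
# delay and check in the real test differ.
readinessProbe:
  exec:
    command: ["test", "-e", "/tmp/healthy"]
  initialDelaySeconds: 20
  periodSeconds: 5
```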
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:29:23.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 12:29:23.646: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul 23 12:29:23.654: INFO: Number of nodes with available pods: 0
Jul 23 12:29:23.654: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul 23 12:29:23.756: INFO: Number of nodes with available pods: 0
Jul 23 12:29:23.756: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:24.761: INFO: Number of nodes with available pods: 0
Jul 23 12:29:24.761: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:25.761: INFO: Number of nodes with available pods: 0
Jul 23 12:29:25.761: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:26.766: INFO: Number of nodes with available pods: 1
Jul 23 12:29:26.766: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul 23 12:29:26.838: INFO: Number of nodes with available pods: 1
Jul 23 12:29:26.838: INFO: Number of running nodes: 0, number of available pods: 1
Jul 23 12:29:27.844: INFO: Number of nodes with available pods: 0
Jul 23 12:29:27.844: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul 23 12:29:27.863: INFO: Number of nodes with available pods: 0
Jul 23 12:29:27.863: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:28.868: INFO: Number of nodes with available pods: 0
Jul 23 12:29:28.868: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:29.868: INFO: Number of nodes with available pods: 0
Jul 23 12:29:29.868: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:30.882: INFO: Number of nodes with available pods: 0
Jul 23 12:29:30.882: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:31.868: INFO: Number of nodes with available pods: 0
Jul 23 12:29:31.868: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:32.867: INFO: Number of nodes with available pods: 0
Jul 23 12:29:32.867: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:33.867: INFO: Number of nodes with available pods: 0
Jul 23 12:29:33.867: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:34.868: INFO: Number of nodes with available pods: 0
Jul 23 12:29:34.868: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:35.868: INFO: Number of nodes with available pods: 0
Jul 23 12:29:35.868: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:36.868: INFO: Number of nodes with available pods: 0
Jul 23 12:29:36.868: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:37.867: INFO: Number of nodes with available pods: 0
Jul 23 12:29:37.867: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:38.867: INFO: Number of nodes with available pods: 0
Jul 23 12:29:38.867: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:39.867: INFO: Number of nodes with available pods: 0
Jul 23 12:29:39.867: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:29:40.867: INFO: Number of nodes with available pods: 1
Jul 23 12:29:40.867: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-jjjfx, will wait for the garbage collector to delete the pods
Jul 23 12:29:40.986: INFO: Deleting DaemonSet.extensions daemon-set took: 61.188302ms
Jul 23 12:29:41.086: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.26975ms
Jul 23 12:29:47.691: INFO: Number of nodes with available pods: 0
Jul 23 12:29:47.691: INFO: Number of running nodes: 0, number of available pods: 0
Jul 23 12:29:47.694: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jjjfx/daemonsets","resourceVersion":"2365443"},"items":null}

Jul 23 12:29:47.697: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jjjfx/pods","resourceVersion":"2365443"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:29:47.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-jjjfx" for this suite.
Jul 23 12:29:53.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:29:53.814: INFO: namespace: e2e-tests-daemonsets-jjjfx, resource: bindings, ignored listing per whitelist
Jul 23 12:29:53.827: INFO: namespace e2e-tests-daemonsets-jjjfx deletion completed in 6.086367968s

• [SLOW TEST:30.706 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
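The "complex daemon" test above constrains a DaemonSet with a node-selector label, then relabels nodes (blue, then green) and watches daemon pods get scheduled and evicted, finally switching the update strategy to RollingUpdate. A sketch of that DaemonSet shape, assuming an illustrative label key and image:

```yaml
# Illustrative sketch of the test's DaemonSet: node-selector-constrained,
# RollingUpdate strategy. Relabeling a node to match (or stop matching)
# the selector schedules or evicts the daemon pod on it.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      nodeSelector:
        color: green        # illustrative key; the test toggles this label
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
```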
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:29:53.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jul 23 12:29:53.906: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jul 23 12:29:53.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:29:58.167: INFO: stderr: ""
Jul 23 12:29:58.167: INFO: stdout: "service/redis-slave created\n"
Jul 23 12:29:58.167: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jul 23 12:29:58.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:29:58.443: INFO: stderr: ""
Jul 23 12:29:58.443: INFO: stdout: "service/redis-master created\n"
Jul 23 12:29:58.444: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul 23 12:29:58.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:29:58.762: INFO: stderr: ""
Jul 23 12:29:58.762: INFO: stdout: "service/frontend created\n"
Jul 23 12:29:58.763: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jul 23 12:29:58.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:29:59.038: INFO: stderr: ""
Jul 23 12:29:59.038: INFO: stdout: "deployment.extensions/frontend created\n"
Jul 23 12:29:59.039: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 23 12:29:59.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:29:59.419: INFO: stderr: ""
Jul 23 12:29:59.419: INFO: stdout: "deployment.extensions/redis-master created\n"
Jul 23 12:29:59.419: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jul 23 12:29:59.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:29:59.713: INFO: stderr: ""
Jul 23 12:29:59.713: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jul 23 12:29:59.713: INFO: Waiting for all frontend pods to be Running.
Jul 23 12:30:09.763: INFO: Waiting for frontend to serve content.
Jul 23 12:30:09.779: INFO: Trying to add a new entry to the guestbook.
Jul 23 12:30:09.812: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul 23 12:30:09.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:30:10.054: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 23 12:30:10.054: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul 23 12:30:10.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:30:10.218: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 23 12:30:10.218: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 23 12:30:10.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:30:10.361: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 23 12:30:10.361: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 23 12:30:10.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:30:10.462: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 23 12:30:10.462: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 23 12:30:10.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:30:10.582: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 23 12:30:10.583: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 23 12:30:10.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8w2n2'
Jul 23 12:30:11.110: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 23 12:30:11.110: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:30:11.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8w2n2" for this suite.
Jul 23 12:30:51.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:30:51.224: INFO: namespace: e2e-tests-kubectl-8w2n2, resource: bindings, ignored listing per whitelist
Jul 23 12:30:51.231: INFO: namespace e2e-tests-kubectl-8w2n2 deletion completed in 40.117393432s

• [SLOW TEST:57.404 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
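The guestbook services created above route traffic purely by label selection: each Service's `selector` must match a pod's labels key-for-key. A minimal Python sketch of that matching rule (the function and pod names are illustrative, not the framework's code; labels are taken from the manifests logged above):

```python
def select_pods(selector, pods):
    """Return names of pods whose labels contain every selector key/value
    pair -- the rule a Service uses to choose its backend pods."""
    return [name for name, labels in sorted(pods.items())
            if all(labels.get(k) == v for k, v in selector.items())]

# Labels copied from the guestbook manifests in the log above.
pods = {
    "redis-master-0": {"app": "redis", "role": "master", "tier": "backend"},
    "redis-slave-0":  {"app": "redis", "role": "slave",  "tier": "backend"},
    "redis-slave-1":  {"app": "redis", "role": "slave",  "tier": "backend"},
    "frontend-0":     {"app": "guestbook", "tier": "frontend"},
}

print(select_pods({"app": "redis", "role": "slave", "tier": "backend"}, pods))
# ['redis-slave-0', 'redis-slave-1']
```

Note the match is a subset test, not equality: the frontend pods carry only the labels the frontend Service selects on, so extra labels on a pod never disqualify it.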
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:30:51.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 12:30:51.371: INFO: Waiting up to 5m0s for pod "downwardapi-volume-584021a3-cce0-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-lh5pk" to be "success or failure"
Jul 23 12:30:51.390: INFO: Pod "downwardapi-volume-584021a3-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.071246ms
Jul 23 12:30:53.395: INFO: Pod "downwardapi-volume-584021a3-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02353693s
Jul 23 12:30:56.999: INFO: Pod "downwardapi-volume-584021a3-cce0-11ea-92a5-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 5.62806111s
Jul 23 12:30:59.004: INFO: Pod "downwardapi-volume-584021a3-cce0-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.632533242s
STEP: Saw pod success
Jul 23 12:30:59.004: INFO: Pod "downwardapi-volume-584021a3-cce0-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:30:59.007: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-584021a3-cce0-11ea-92a5-0242ac11000b container client-container: 
STEP: delete the pod
Jul 23 12:30:59.203: INFO: Waiting for pod downwardapi-volume-584021a3-cce0-11ea-92a5-0242ac11000b to disappear
Jul 23 12:30:59.245: INFO: Pod downwardapi-volume-584021a3-cce0-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:30:59.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lh5pk" for this suite.
Jul 23 12:31:05.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:31:05.311: INFO: namespace: e2e-tests-downward-api-lh5pk, resource: bindings, ignored listing per whitelist
Jul 23 12:31:05.424: INFO: namespace e2e-tests-downward-api-lh5pk deletion completed in 6.175090485s

• [SLOW TEST:14.193 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
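The `Waiting up to 5m0s for pod ... to be "success or failure"` lines above come from a poll loop: fetch the pod's phase, stop on a terminal phase, otherwise sleep and retry until the deadline. A simplified sketch under those assumptions (this is not the e2e framework's actual implementation; `get_phase` stands in for an API call):

```python
import time

def wait_for_pod(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches a terminal phase
    (Succeeded or Failed) or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulate the phase sequence seen in the log: Pending -> Running -> Succeeded.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_pod(lambda: next(phases), interval=0))
# Succeeded
```

The elapsed times logged at roughly 2-second steps reflect the poll interval, not how long the pod itself took between phases.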
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:31:05.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:31:09.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-v7cnr" for this suite.
Jul 23 12:31:15.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:31:15.740: INFO: namespace: e2e-tests-kubelet-test-v7cnr, resource: bindings, ignored listing per whitelist
Jul 23 12:31:15.740: INFO: namespace e2e-tests-kubelet-test-v7cnr deletion completed in 6.102362857s

• [SLOW TEST:10.315 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:31:15.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0723 12:32:01.330373       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 23 12:32:01.330: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:32:01.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-m5hlm" for this suite.
Jul 23 12:32:13.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:32:13.368: INFO: namespace: e2e-tests-gc-m5hlm, resource: bindings, ignored listing per whitelist
Jul 23 12:32:13.423: INFO: namespace e2e-tests-gc-m5hlm deletion completed in 12.089885488s

• [SLOW TEST:57.683 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
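The garbage-collector test above deletes a replication controller with delete options that orphan its pods: instead of cascading the delete, the controller's ownerReference is stripped from each pod and the pods keep running (hence the 30-second watch for mistaken deletions). A toy model of that orphaning step, assuming pods are tracked only by their owner UIDs (names are illustrative):

```python
def delete_with_orphan(owner_uid, pods):
    """Simulate deleting an owner with the Orphan propagation policy:
    dependents lose the ownerReference but are themselves kept."""
    return {name: [ref for ref in refs if ref != owner_uid]
            for name, refs in pods.items()}

pods = {"pod-a": ["rc-123"], "pod-b": ["rc-123"]}
print(delete_with_orphan("rc-123", pods))
# {'pod-a': [], 'pod-b': []}  -- both pods survive, now ownerless
```

With the default (background/cascading) policy the garbage collector would instead delete any pod whose sole remaining owner was the deleted controller.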
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:32:13.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 12:32:13.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8936c5ed-cce0-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-lp9t2" to be "success or failure"
Jul 23 12:32:13.516: INFO: Pod "downwardapi-volume-8936c5ed-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659668ms
Jul 23 12:32:15.520: INFO: Pod "downwardapi-volume-8936c5ed-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011051071s
Jul 23 12:32:17.525: INFO: Pod "downwardapi-volume-8936c5ed-cce0-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015510242s
STEP: Saw pod success
Jul 23 12:32:17.525: INFO: Pod "downwardapi-volume-8936c5ed-cce0-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:32:17.528: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-8936c5ed-cce0-11ea-92a5-0242ac11000b container client-container: 
STEP: delete the pod
Jul 23 12:32:17.561: INFO: Waiting for pod downwardapi-volume-8936c5ed-cce0-11ea-92a5-0242ac11000b to disappear
Jul 23 12:32:17.604: INFO: Pod downwardapi-volume-8936c5ed-cce0-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:32:17.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lp9t2" for this suite.
Jul 23 12:32:23.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:32:23.649: INFO: namespace: e2e-tests-projected-lp9t2, resource: bindings, ignored listing per whitelist
Jul 23 12:32:23.699: INFO: namespace e2e-tests-projected-lp9t2 deletion completed in 6.090711325s

• [SLOW TEST:10.275 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:32:23.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-5qwf4
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5qwf4 to expose endpoints map[]
Jul 23 12:32:25.149: INFO: Get endpoints failed (646.693446ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul 23 12:32:26.153: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5qwf4 exposes endpoints map[] (1.650818669s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-5qwf4
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5qwf4 to expose endpoints map[pod1:[80]]
Jul 23 12:32:30.226: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5qwf4 exposes endpoints map[pod1:[80]] (4.065849303s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-5qwf4
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5qwf4 to expose endpoints map[pod1:[80] pod2:[80]]
Jul 23 12:32:33.400: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5qwf4 exposes endpoints map[pod1:[80] pod2:[80]] (3.169065529s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-5qwf4
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5qwf4 to expose endpoints map[pod2:[80]]
Jul 23 12:32:34.475: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5qwf4 exposes endpoints map[pod2:[80]] (1.071534496s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-5qwf4
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-5qwf4 to expose endpoints map[]
Jul 23 12:32:35.652: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-5qwf4 exposes endpoints map[] (1.173035725s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:32:35.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-5qwf4" for this suite.
Jul 23 12:32:57.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:32:57.844: INFO: namespace: e2e-tests-services-5qwf4, resource: bindings, ignored listing per whitelist
Jul 23 12:32:57.858: INFO: namespace e2e-tests-services-5qwf4 deletion completed in 22.10718141s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:34.159 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
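The endpoints maps the Services test validates (`map[]`, `map[pod1:[80]]`, `map[pod1:[80] pod2:[80]]`, ...) are just the set of ready backing pods, each paired with the service port. A small sketch of how that map evolves as pods are created and deleted (illustrative helper, not the framework's validator):

```python
def endpoints(ready_pods, port=80):
    """Build the endpoints map the test logs: one entry per ready
    backing pod, each exposing the service port."""
    return {name: [port] for name in sorted(ready_pods)}

ready = set()
print(endpoints(ready))    # {}                                -> map[]
ready.add("pod1")
print(endpoints(ready))    # {'pod1': [80]}                    -> map[pod1:[80]]
ready.add("pod2")
print(endpoints(ready))    # {'pod1': [80], 'pod2': [80]}      -> map[pod1:[80] pod2:[80]]
ready.discard("pod1")
print(endpoints(ready))    # {'pod2': [80]}                    -> map[pod2:[80]]
```

Each transition in the log above is the endpoints controller catching up with a pod add or delete; the "(Ns elapsed)" figures measure that propagation delay.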
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:32:57.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a3b52b84-cce0-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 23 12:32:57.970: INFO: Waiting up to 5m0s for pod "pod-secrets-a3b6d56c-cce0-11ea-92a5-0242ac11000b" in namespace "e2e-tests-secrets-vhvqc" to be "success or failure"
Jul 23 12:32:57.975: INFO: Pod "pod-secrets-a3b6d56c-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.409997ms
Jul 23 12:33:00.174: INFO: Pod "pod-secrets-a3b6d56c-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203608685s
Jul 23 12:33:02.178: INFO: Pod "pod-secrets-a3b6d56c-cce0-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.207829743s
STEP: Saw pod success
Jul 23 12:33:02.178: INFO: Pod "pod-secrets-a3b6d56c-cce0-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:33:02.181: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-a3b6d56c-cce0-11ea-92a5-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 23 12:33:02.242: INFO: Waiting for pod pod-secrets-a3b6d56c-cce0-11ea-92a5-0242ac11000b to disappear
Jul 23 12:33:02.257: INFO: Pod pod-secrets-a3b6d56c-cce0-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:33:02.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vhvqc" for this suite.
Jul 23 12:33:08.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:33:08.305: INFO: namespace: e2e-tests-secrets-vhvqc, resource: bindings, ignored listing per whitelist
Jul 23 12:33:08.355: INFO: namespace e2e-tests-secrets-vhvqc deletion completed in 6.094627534s

• [SLOW TEST:10.497 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:33:08.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-a9f54639-cce0-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 23 12:33:08.456: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a9f6d75f-cce0-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-dt6kn" to be "success or failure"
Jul 23 12:33:08.473: INFO: Pod "pod-projected-configmaps-a9f6d75f-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.709587ms
Jul 23 12:33:10.477: INFO: Pod "pod-projected-configmaps-a9f6d75f-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020768793s
Jul 23 12:33:12.481: INFO: Pod "pod-projected-configmaps-a9f6d75f-cce0-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025032713s
STEP: Saw pod success
Jul 23 12:33:12.482: INFO: Pod "pod-projected-configmaps-a9f6d75f-cce0-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:33:12.485: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-a9f6d75f-cce0-11ea-92a5-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 23 12:33:12.501: INFO: Waiting for pod pod-projected-configmaps-a9f6d75f-cce0-11ea-92a5-0242ac11000b to disappear
Jul 23 12:33:12.520: INFO: Pod pod-projected-configmaps-a9f6d75f-cce0-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:33:12.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dt6kn" for this suite.
Jul 23 12:33:18.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:33:18.557: INFO: namespace: e2e-tests-projected-dt6kn, resource: bindings, ignored listing per whitelist
Jul 23 12:33:18.618: INFO: namespace e2e-tests-projected-dt6kn deletion completed in 6.094337036s

• [SLOW TEST:10.263 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
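The projected-configMap volume test above consumes a ConfigMap as files: with no explicit `items` list every key becomes a file named after the key; an `items` list instead maps chosen keys to relative paths. A sketch of that layout rule (key names and paths here are made up for illustration):

```python
def project_configmap(data, items=None):
    """Lay out ConfigMap entries as volume files: by default one file
    per key; with an 'items' list, only the listed keys appear, at the
    given relative paths."""
    if items is None:
        return dict(data)
    return {entry["path"]: data[entry["key"]] for entry in items}

data = {"data-1": "value-1"}
print(project_configmap(data))
# {'data-1': 'value-1'}
print(project_configmap(data, items=[{"key": "data-1", "path": "path/to/data-2"}]))
# {'path/to/data-2': 'value-1'}
```

The earlier "volume with mappings" ConfigMap spec exercises the `items` branch; the plain "consumable from pods in volume" specs exercise the default one-file-per-key branch.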
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:33:18.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 23 12:33:18.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:18.827: INFO: Number of nodes with available pods: 0
Jul 23 12:33:18.827: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:33:19.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:19.836: INFO: Number of nodes with available pods: 0
Jul 23 12:33:19.836: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:33:20.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:20.835: INFO: Number of nodes with available pods: 0
Jul 23 12:33:20.835: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:33:21.923: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:21.926: INFO: Number of nodes with available pods: 0
Jul 23 12:33:21.926: INFO: Node hunter-worker is running more than one daemon pod
Jul 23 12:33:22.840: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:22.842: INFO: Number of nodes with available pods: 1
Jul 23 12:33:22.842: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:23.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:23.834: INFO: Number of nodes with available pods: 2
Jul 23 12:33:23.835: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jul 23 12:33:23.851: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:23.854: INFO: Number of nodes with available pods: 1
Jul 23 12:33:23.854: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:24.860: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:24.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:24.863: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:25.859: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:25.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:25.863: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:26.859: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:26.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:26.863: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:27.859: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:27.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:27.863: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:28.859: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:28.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:28.863: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:29.859: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:29.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:29.863: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:30.860: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:30.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:30.864: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:31.860: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:31.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:31.863: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:32.860: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:32.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:32.863: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:33.860: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:33.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:33.863: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:34.859: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:34.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:34.863: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:35.858: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:35.861: INFO: Number of nodes with available pods: 1
Jul 23 12:33:35.861: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:36.859: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:36.863: INFO: Number of nodes with available pods: 1
Jul 23 12:33:36.863: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:37.865: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:37.868: INFO: Number of nodes with available pods: 1
Jul 23 12:33:37.868: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:38.966: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:38.969: INFO: Number of nodes with available pods: 1
Jul 23 12:33:38.969: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:39.870: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:39.873: INFO: Number of nodes with available pods: 1
Jul 23 12:33:39.873: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 23 12:33:40.860: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 23 12:33:40.864: INFO: Number of nodes with available pods: 2
Jul 23 12:33:40.864: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-pcpvh, will wait for the garbage collector to delete the pods
Jul 23 12:33:40.927: INFO: Deleting DaemonSet.extensions daemon-set took: 6.749898ms
Jul 23 12:33:41.027: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.275091ms
Jul 23 12:33:47.653: INFO: Number of nodes with available pods: 0
Jul 23 12:33:47.653: INFO: Number of running nodes: 0, number of available pods: 0
Jul 23 12:33:47.656: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-pcpvh/daemonsets","resourceVersion":"2366516"},"items":null}

Jul 23 12:33:47.658: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-pcpvh/pods","resourceVersion":"2366516"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:33:47.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-pcpvh" for this suite.
Jul 23 12:33:53.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:33:53.741: INFO: namespace: e2e-tests-daemonsets-pcpvh, resource: bindings, ignored listing per whitelist
Jul 23 12:33:53.765: INFO: namespace e2e-tests-daemonsets-pcpvh deletion completed in 6.094189909s

• [SLOW TEST:35.146 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
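The repeated "Number of nodes with available pods" lines in the DaemonSet spec above come from a poll loop: the test re-checks the DaemonSet roughly once per second until every schedulable node reports an available pod, or a timeout expires. A minimal sketch of that retry pattern (a hypothetical helper for illustration, not the e2e framework's actual code):

```python
import time

def wait_until(condition, timeout=30.0, interval=1.0):
    """Poll `condition` until it returns True or `timeout` elapses.

    Mirrors the pattern in the log above: check, sleep ~1s, check again,
    giving up after the deadline. (Hypothetical sketch, not framework code.)
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():  # e.g. "available pods == schedulable nodes"
            return True
        time.sleep(interval)
    return False
```

In the run above the condition flips from 0/2 to 2/2 available pods after about five iterations, at which point the loop exits and the test proceeds to the next STEP.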
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:33:53.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 23 12:33:53.960: INFO: Waiting up to 5m0s for pod "pod-c512f00f-cce0-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-xxhjl" to be "success or failure"
Jul 23 12:33:53.993: INFO: Pod "pod-c512f00f-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.257499ms
Jul 23 12:33:56.007: INFO: Pod "pod-c512f00f-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046681781s
Jul 23 12:33:58.011: INFO: Pod "pod-c512f00f-cce0-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050600843s
STEP: Saw pod success
Jul 23 12:33:58.011: INFO: Pod "pod-c512f00f-cce0-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:33:58.014: INFO: Trying to get logs from node hunter-worker pod pod-c512f00f-cce0-11ea-92a5-0242ac11000b container test-container: 
STEP: delete the pod
Jul 23 12:33:58.104: INFO: Waiting for pod pod-c512f00f-cce0-11ea-92a5-0242ac11000b to disappear
Jul 23 12:33:58.115: INFO: Pod pod-c512f00f-cce0-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:33:58.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xxhjl" for this suite.
Jul 23 12:34:04.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:34:04.184: INFO: namespace: e2e-tests-emptydir-xxhjl, resource: bindings, ignored listing per whitelist
Jul 23 12:34:04.202: INFO: namespace e2e-tests-emptydir-xxhjl deletion completed in 6.083786812s

• [SLOW TEST:10.437 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:34:04.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 23 12:34:12.411: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 23 12:34:12.430: INFO: Pod pod-with-prestop-http-hook still exists
Jul 23 12:34:14.431: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 23 12:34:14.435: INFO: Pod pod-with-prestop-http-hook still exists
Jul 23 12:34:16.431: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 23 12:34:16.435: INFO: Pod pod-with-prestop-http-hook still exists
Jul 23 12:34:18.431: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 23 12:34:18.434: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:34:18.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-n4ghd" for this suite.
Jul 23 12:34:40.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:34:40.710: INFO: namespace: e2e-tests-container-lifecycle-hook-n4ghd, resource: bindings, ignored listing per whitelist
Jul 23 12:34:40.944: INFO: namespace e2e-tests-container-lifecycle-hook-n4ghd deletion completed in 22.500744997s

• [SLOW TEST:36.741 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:34:40.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-e129cec6-cce0-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 23 12:34:41.079: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e12bf76f-cce0-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-b2d59" to be "success or failure"
Jul 23 12:34:41.083: INFO: Pod "pod-projected-secrets-e12bf76f-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.348261ms
Jul 23 12:34:43.087: INFO: Pod "pod-projected-secrets-e12bf76f-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008726964s
Jul 23 12:34:45.092: INFO: Pod "pod-projected-secrets-e12bf76f-cce0-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013085369s
STEP: Saw pod success
Jul 23 12:34:45.092: INFO: Pod "pod-projected-secrets-e12bf76f-cce0-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:34:45.095: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-e12bf76f-cce0-11ea-92a5-0242ac11000b container projected-secret-volume-test: 
STEP: delete the pod
Jul 23 12:34:45.141: INFO: Waiting for pod pod-projected-secrets-e12bf76f-cce0-11ea-92a5-0242ac11000b to disappear
Jul 23 12:34:45.155: INFO: Pod pod-projected-secrets-e12bf76f-cce0-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:34:45.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b2d59" for this suite.
Jul 23 12:34:51.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:34:51.219: INFO: namespace: e2e-tests-projected-b2d59, resource: bindings, ignored listing per whitelist
Jul 23 12:34:51.245: INFO: namespace e2e-tests-projected-b2d59 deletion completed in 6.087044952s

• [SLOW TEST:10.301 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:34:51.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-5dr9f/secret-test-e74784be-cce0-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 23 12:34:51.327: INFO: Waiting up to 5m0s for pod "pod-configmaps-e747fc15-cce0-11ea-92a5-0242ac11000b" in namespace "e2e-tests-secrets-5dr9f" to be "success or failure"
Jul 23 12:34:51.373: INFO: Pod "pod-configmaps-e747fc15-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 45.539303ms
Jul 23 12:34:53.377: INFO: Pod "pod-configmaps-e747fc15-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049150585s
Jul 23 12:34:55.380: INFO: Pod "pod-configmaps-e747fc15-cce0-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052865246s
STEP: Saw pod success
Jul 23 12:34:55.380: INFO: Pod "pod-configmaps-e747fc15-cce0-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:34:55.383: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-e747fc15-cce0-11ea-92a5-0242ac11000b container env-test: 
STEP: delete the pod
Jul 23 12:34:55.407: INFO: Waiting for pod pod-configmaps-e747fc15-cce0-11ea-92a5-0242ac11000b to disappear
Jul 23 12:34:55.410: INFO: Pod pod-configmaps-e747fc15-cce0-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:34:55.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5dr9f" for this suite.
Jul 23 12:35:01.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:35:01.436: INFO: namespace: e2e-tests-secrets-5dr9f, resource: bindings, ignored listing per whitelist
Jul 23 12:35:01.527: INFO: namespace e2e-tests-secrets-5dr9f deletion completed in 6.113056646s

• [SLOW TEST:10.282 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:35:01.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-ed6f8236-cce0-11ea-92a5-0242ac11000b
Jul 23 12:35:01.698: INFO: Pod name my-hostname-basic-ed6f8236-cce0-11ea-92a5-0242ac11000b: Found 0 pods out of 1
Jul 23 12:35:06.702: INFO: Pod name my-hostname-basic-ed6f8236-cce0-11ea-92a5-0242ac11000b: Found 1 pods out of 1
Jul 23 12:35:06.702: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ed6f8236-cce0-11ea-92a5-0242ac11000b" are running
Jul 23 12:35:06.706: INFO: Pod "my-hostname-basic-ed6f8236-cce0-11ea-92a5-0242ac11000b-zds66" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-23 12:35:01 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-23 12:35:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-23 12:35:04 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-23 12:35:01 +0000 UTC Reason: Message:}])
Jul 23 12:35:06.706: INFO: Trying to dial the pod
Jul 23 12:35:11.717: INFO: Controller my-hostname-basic-ed6f8236-cce0-11ea-92a5-0242ac11000b: Got expected result from replica 1 [my-hostname-basic-ed6f8236-cce0-11ea-92a5-0242ac11000b-zds66]: "my-hostname-basic-ed6f8236-cce0-11ea-92a5-0242ac11000b-zds66", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:35:11.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-l4nqj" for this suite.
Jul 23 12:35:17.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:35:17.805: INFO: namespace: e2e-tests-replication-controller-l4nqj, resource: bindings, ignored listing per whitelist
Jul 23 12:35:17.873: INFO: namespace e2e-tests-replication-controller-l4nqj deletion completed in 6.152083487s

• [SLOW TEST:16.345 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:35:17.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f7306c0e-cce0-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 23 12:35:18.107: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f7362f07-cce0-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-msrvw" to be "success or failure"
Jul 23 12:35:18.141: INFO: Pod "pod-projected-configmaps-f7362f07-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.548076ms
Jul 23 12:35:20.146: INFO: Pod "pod-projected-configmaps-f7362f07-cce0-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038918269s
Jul 23 12:35:22.149: INFO: Pod "pod-projected-configmaps-f7362f07-cce0-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042532604s
STEP: Saw pod success
Jul 23 12:35:22.149: INFO: Pod "pod-projected-configmaps-f7362f07-cce0-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:35:22.152: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-f7362f07-cce0-11ea-92a5-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 23 12:35:22.166: INFO: Waiting for pod pod-projected-configmaps-f7362f07-cce0-11ea-92a5-0242ac11000b to disappear
Jul 23 12:35:22.212: INFO: Pod pod-projected-configmaps-f7362f07-cce0-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:35:22.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-msrvw" for this suite.
Jul 23 12:35:28.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:35:28.283: INFO: namespace: e2e-tests-projected-msrvw, resource: bindings, ignored listing per whitelist
Jul 23 12:35:28.299: INFO: namespace e2e-tests-projected-msrvw deletion completed in 6.083750782s

• [SLOW TEST:10.425 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:35:28.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-949ql
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-949ql
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-949ql
Jul 23 12:35:28.553: INFO: Found 0 stateful pods, waiting for 1
Jul 23 12:35:38.557: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul 23 12:35:38.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 23 12:35:38.846: INFO: stderr: "I0723 12:35:38.697753    2508 log.go:172] (0xc000138790) (0xc000754640) Create stream\nI0723 12:35:38.697831    2508 log.go:172] (0xc000138790) (0xc000754640) Stream added, broadcasting: 1\nI0723 12:35:38.700137    2508 log.go:172] (0xc000138790) Reply frame received for 1\nI0723 12:35:38.700182    2508 log.go:172] (0xc000138790) (0xc000632dc0) Create stream\nI0723 12:35:38.700202    2508 log.go:172] (0xc000138790) (0xc000632dc0) Stream added, broadcasting: 3\nI0723 12:35:38.701191    2508 log.go:172] (0xc000138790) Reply frame received for 3\nI0723 12:35:38.701249    2508 log.go:172] (0xc000138790) (0xc0007546e0) Create stream\nI0723 12:35:38.701265    2508 log.go:172] (0xc000138790) (0xc0007546e0) Stream added, broadcasting: 5\nI0723 12:35:38.702149    2508 log.go:172] (0xc000138790) Reply frame received for 5\nI0723 12:35:38.839269    2508 log.go:172] (0xc000138790) Data frame received for 3\nI0723 12:35:38.839318    2508 log.go:172] (0xc000632dc0) (3) Data frame handling\nI0723 12:35:38.839348    2508 log.go:172] (0xc000632dc0) (3) Data frame sent\nI0723 12:35:38.839467    2508 log.go:172] (0xc000138790) Data frame received for 5\nI0723 12:35:38.839502    2508 log.go:172] (0xc0007546e0) (5) Data frame handling\nI0723 12:35:38.839533    2508 log.go:172] (0xc000138790) Data frame received for 3\nI0723 12:35:38.839548    2508 log.go:172] (0xc000632dc0) (3) Data frame handling\nI0723 12:35:38.841475    2508 log.go:172] (0xc000138790) Data frame received for 1\nI0723 12:35:38.841508    2508 log.go:172] (0xc000754640) (1) Data frame handling\nI0723 12:35:38.841528    2508 log.go:172] (0xc000754640) (1) Data frame sent\nI0723 12:35:38.841549    2508 log.go:172] (0xc000138790) (0xc000754640) Stream removed, broadcasting: 1\nI0723 12:35:38.841602    2508 log.go:172] (0xc000138790) Go away received\nI0723 12:35:38.841766    2508 log.go:172] (0xc000138790) (0xc000754640) Stream removed, broadcasting: 1\nI0723 12:35:38.841795    2508 log.go:172] (0xc000138790) (0xc000632dc0) Stream removed, broadcasting: 3\nI0723 12:35:38.841813    2508 log.go:172] (0xc000138790) (0xc0007546e0) Stream removed, broadcasting: 5\n"
Jul 23 12:35:38.846: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 23 12:35:38.846: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 23 12:35:38.850: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 23 12:35:48.854: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 23 12:35:48.854: INFO: Waiting for statefulset status.replicas updated to 0
Jul 23 12:35:48.949: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999428s
Jul 23 12:35:49.953: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.912937748s
Jul 23 12:35:50.958: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.908864254s
Jul 23 12:35:52.039: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.904362365s
Jul 23 12:35:53.044: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.822820165s
Jul 23 12:35:54.049: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.818058392s
Jul 23 12:35:55.053: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.813279962s
Jul 23 12:35:56.057: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.808912239s
Jul 23 12:35:57.061: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.805278282s
Jul 23 12:35:58.066: INFO: Verifying statefulset ss doesn't scale past 1 for another 800.813522ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-949ql
Jul 23 12:35:59.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:35:59.293: INFO: stderr: "I0723 12:35:59.200236    2531 log.go:172] (0xc000138630) (0xc00070a640) Create stream\nI0723 12:35:59.200287    2531 log.go:172] (0xc000138630) (0xc00070a640) Stream added, broadcasting: 1\nI0723 12:35:59.202267    2531 log.go:172] (0xc000138630) Reply frame received for 1\nI0723 12:35:59.202295    2531 log.go:172] (0xc000138630) (0xc000620be0) Create stream\nI0723 12:35:59.202302    2531 log.go:172] (0xc000138630) (0xc000620be0) Stream added, broadcasting: 3\nI0723 12:35:59.203076    2531 log.go:172] (0xc000138630) Reply frame received for 3\nI0723 12:35:59.203112    2531 log.go:172] (0xc000138630) (0xc00070a6e0) Create stream\nI0723 12:35:59.203127    2531 log.go:172] (0xc000138630) (0xc00070a6e0) Stream added, broadcasting: 5\nI0723 12:35:59.203894    2531 log.go:172] (0xc000138630) Reply frame received for 5\nI0723 12:35:59.286269    2531 log.go:172] (0xc000138630) Data frame received for 5\nI0723 12:35:59.286316    2531 log.go:172] (0xc00070a6e0) (5) Data frame handling\nI0723 12:35:59.286351    2531 log.go:172] (0xc000138630) Data frame received for 3\nI0723 12:35:59.286362    2531 log.go:172] (0xc000620be0) (3) Data frame handling\nI0723 12:35:59.286370    2531 log.go:172] (0xc000620be0) (3) Data frame sent\nI0723 12:35:59.286376    2531 log.go:172] (0xc000138630) Data frame received for 3\nI0723 12:35:59.286381    2531 log.go:172] (0xc000620be0) (3) Data frame handling\nI0723 12:35:59.287910    2531 log.go:172] (0xc000138630) Data frame received for 1\nI0723 12:35:59.287947    2531 log.go:172] (0xc00070a640) (1) Data frame handling\nI0723 12:35:59.287977    2531 log.go:172] (0xc00070a640) (1) Data frame sent\nI0723 12:35:59.288119    2531 log.go:172] (0xc000138630) (0xc00070a640) Stream removed, broadcasting: 1\nI0723 12:35:59.288151    2531 log.go:172] (0xc000138630) Go away received\nI0723 12:35:59.288344    2531 log.go:172] (0xc000138630) (0xc00070a640) Stream removed, broadcasting: 1\nI0723 12:35:59.288370    2531 
log.go:172] (0xc000138630) (0xc000620be0) Stream removed, broadcasting: 3\nI0723 12:35:59.288383    2531 log.go:172] (0xc000138630) (0xc00070a6e0) Stream removed, broadcasting: 5\n"
Jul 23 12:35:59.293: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 23 12:35:59.293: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

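The exec'd command above is how the test toggles pod readiness: nginx's readiness probe fetches index.html, so moving the file aside makes the probe fail, and moving it back restores readiness. A minimal local sketch of that pattern (all paths here are stand-in temp directories, not the pod's real filesystem); note how `|| true` pins the exit status to 0 so `kubectl exec` reports success even when the file is already gone:

```shell
# Hypothetical sketch of the readiness-toggling mv pattern, run locally.
demo=$(mktemp -d)
mkdir -p "$demo/html"
echo ok > "$demo/html/index.html"
mv -v "$demo/html/index.html" "$demo/" || true   # probe would now fail (404)
mv -v "$demo/index.html" "$demo/html/" || true   # probe would pass again
mv -v "$demo/html/missing.html" "$demo/" || true # mv errors, but still exits 0
final_status=$?
rm -rf "$demo"
```

The `|| true` is what lets the same command be replayed safely on every pod regardless of whether the file has already been moved.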
Jul 23 12:35:59.297: INFO: Found 1 stateful pod, waiting for 3
Jul 23 12:36:09.301: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 12:36:09.301: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 23 12:36:09.301: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
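The ordered-scale-up check relies on the StatefulSet controller creating pods strictly in ordinal order (ss-0, then ss-1, then ss-2). A toy sketch of that invariant (the observed creation order here is a hard-coded stand-in, not parsed from this log): sorting the observed order by name should change nothing.

```shell
# Hypothetical sketch: pods created in ordinal order sort to themselves.
creation_order="ss-0 ss-1 ss-2"   # stand-in for the observed pod start order
sorted=$(printf '%s\n' $creation_order | sort | tr '\n' ' ')
[ "$sorted" = "ss-0 ss-1 ss-2 " ] && echo "scaled up in order"
```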
STEP: Scale down will halt with unhealthy stateful pod
Jul 23 12:36:09.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 23 12:36:09.514: INFO: stderr: "I0723 12:36:09.429723    2554 log.go:172] (0xc0008722c0) (0xc000569400) Create stream\nI0723 12:36:09.429789    2554 log.go:172] (0xc0008722c0) (0xc000569400) Stream added, broadcasting: 1\nI0723 12:36:09.432305    2554 log.go:172] (0xc0008722c0) Reply frame received for 1\nI0723 12:36:09.432365    2554 log.go:172] (0xc0008722c0) (0xc0004da000) Create stream\nI0723 12:36:09.432383    2554 log.go:172] (0xc0008722c0) (0xc0004da000) Stream added, broadcasting: 3\nI0723 12:36:09.433560    2554 log.go:172] (0xc0008722c0) Reply frame received for 3\nI0723 12:36:09.433615    2554 log.go:172] (0xc0008722c0) (0xc0002e6000) Create stream\nI0723 12:36:09.433629    2554 log.go:172] (0xc0008722c0) (0xc0002e6000) Stream added, broadcasting: 5\nI0723 12:36:09.434646    2554 log.go:172] (0xc0008722c0) Reply frame received for 5\nI0723 12:36:09.510137    2554 log.go:172] (0xc0008722c0) Data frame received for 5\nI0723 12:36:09.510169    2554 log.go:172] (0xc0002e6000) (5) Data frame handling\nI0723 12:36:09.510207    2554 log.go:172] (0xc0008722c0) Data frame received for 3\nI0723 12:36:09.510250    2554 log.go:172] (0xc0004da000) (3) Data frame handling\nI0723 12:36:09.510273    2554 log.go:172] (0xc0004da000) (3) Data frame sent\nI0723 12:36:09.510280    2554 log.go:172] (0xc0008722c0) Data frame received for 3\nI0723 12:36:09.510286    2554 log.go:172] (0xc0004da000) (3) Data frame handling\nI0723 12:36:09.511602    2554 log.go:172] (0xc0008722c0) Data frame received for 1\nI0723 12:36:09.511616    2554 log.go:172] (0xc000569400) (1) Data frame handling\nI0723 12:36:09.511621    2554 log.go:172] (0xc000569400) (1) Data frame sent\nI0723 12:36:09.511628    2554 log.go:172] (0xc0008722c0) (0xc000569400) Stream removed, broadcasting: 1\nI0723 12:36:09.511675    2554 log.go:172] (0xc0008722c0) Go away received\nI0723 12:36:09.511755    2554 log.go:172] (0xc0008722c0) (0xc000569400) Stream removed, broadcasting: 1\nI0723 12:36:09.511767    2554 
log.go:172] (0xc0008722c0) (0xc0004da000) Stream removed, broadcasting: 3\nI0723 12:36:09.511772    2554 log.go:172] (0xc0008722c0) (0xc0002e6000) Stream removed, broadcasting: 5\n"
Jul 23 12:36:09.515: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 23 12:36:09.515: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 23 12:36:09.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 23 12:36:09.789: INFO: stderr: "I0723 12:36:09.655805    2577 log.go:172] (0xc000710370) (0xc0006454a0) Create stream\nI0723 12:36:09.655881    2577 log.go:172] (0xc000710370) (0xc0006454a0) Stream added, broadcasting: 1\nI0723 12:36:09.658373    2577 log.go:172] (0xc000710370) Reply frame received for 1\nI0723 12:36:09.658426    2577 log.go:172] (0xc000710370) (0xc0005f0000) Create stream\nI0723 12:36:09.658439    2577 log.go:172] (0xc000710370) (0xc0005f0000) Stream added, broadcasting: 3\nI0723 12:36:09.659630    2577 log.go:172] (0xc000710370) Reply frame received for 3\nI0723 12:36:09.659670    2577 log.go:172] (0xc000710370) (0xc000645540) Create stream\nI0723 12:36:09.659686    2577 log.go:172] (0xc000710370) (0xc000645540) Stream added, broadcasting: 5\nI0723 12:36:09.660875    2577 log.go:172] (0xc000710370) Reply frame received for 5\nI0723 12:36:09.782024    2577 log.go:172] (0xc000710370) Data frame received for 3\nI0723 12:36:09.782076    2577 log.go:172] (0xc0005f0000) (3) Data frame handling\nI0723 12:36:09.782121    2577 log.go:172] (0xc0005f0000) (3) Data frame sent\nI0723 12:36:09.782240    2577 log.go:172] (0xc000710370) Data frame received for 5\nI0723 12:36:09.782272    2577 log.go:172] (0xc000710370) Data frame received for 3\nI0723 12:36:09.782347    2577 log.go:172] (0xc000645540) (5) Data frame handling\nI0723 12:36:09.782425    2577 log.go:172] (0xc0005f0000) (3) Data frame handling\nI0723 12:36:09.784089    2577 log.go:172] (0xc000710370) Data frame received for 1\nI0723 12:36:09.784170    2577 log.go:172] (0xc0006454a0) (1) Data frame handling\nI0723 12:36:09.784207    2577 log.go:172] (0xc0006454a0) (1) Data frame sent\nI0723 12:36:09.784228    2577 log.go:172] (0xc000710370) (0xc0006454a0) Stream removed, broadcasting: 1\nI0723 12:36:09.784248    2577 log.go:172] (0xc000710370) Go away received\nI0723 12:36:09.784507    2577 log.go:172] (0xc000710370) (0xc0006454a0) Stream removed, broadcasting: 1\nI0723 12:36:09.784539    2577 
log.go:172] (0xc000710370) (0xc0005f0000) Stream removed, broadcasting: 3\nI0723 12:36:09.784559    2577 log.go:172] (0xc000710370) (0xc000645540) Stream removed, broadcasting: 5\n"
Jul 23 12:36:09.789: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 23 12:36:09.789: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 23 12:36:09.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 23 12:36:10.029: INFO: stderr: "I0723 12:36:09.917049    2598 log.go:172] (0xc000138160) (0xc0006ee280) Create stream\nI0723 12:36:09.917115    2598 log.go:172] (0xc000138160) (0xc0006ee280) Stream added, broadcasting: 1\nI0723 12:36:09.919977    2598 log.go:172] (0xc000138160) Reply frame received for 1\nI0723 12:36:09.920049    2598 log.go:172] (0xc000138160) (0xc0003d0c80) Create stream\nI0723 12:36:09.920066    2598 log.go:172] (0xc000138160) (0xc0003d0c80) Stream added, broadcasting: 3\nI0723 12:36:09.921412    2598 log.go:172] (0xc000138160) Reply frame received for 3\nI0723 12:36:09.921448    2598 log.go:172] (0xc000138160) (0xc0003d0dc0) Create stream\nI0723 12:36:09.921458    2598 log.go:172] (0xc000138160) (0xc0003d0dc0) Stream added, broadcasting: 5\nI0723 12:36:09.922495    2598 log.go:172] (0xc000138160) Reply frame received for 5\nI0723 12:36:10.022748    2598 log.go:172] (0xc000138160) Data frame received for 3\nI0723 12:36:10.022790    2598 log.go:172] (0xc0003d0c80) (3) Data frame handling\nI0723 12:36:10.022833    2598 log.go:172] (0xc0003d0c80) (3) Data frame sent\nI0723 12:36:10.022855    2598 log.go:172] (0xc000138160) Data frame received for 3\nI0723 12:36:10.022871    2598 log.go:172] (0xc0003d0c80) (3) Data frame handling\nI0723 12:36:10.023370    2598 log.go:172] (0xc000138160) Data frame received for 5\nI0723 12:36:10.023403    2598 log.go:172] (0xc0003d0dc0) (5) Data frame handling\nI0723 12:36:10.024459    2598 log.go:172] (0xc000138160) Data frame received for 1\nI0723 12:36:10.024505    2598 log.go:172] (0xc0006ee280) (1) Data frame handling\nI0723 12:36:10.024539    2598 log.go:172] (0xc0006ee280) (1) Data frame sent\nI0723 12:36:10.024569    2598 log.go:172] (0xc000138160) (0xc0006ee280) Stream removed, broadcasting: 1\nI0723 12:36:10.024607    2598 log.go:172] (0xc000138160) Go away received\nI0723 12:36:10.024993    2598 log.go:172] (0xc000138160) (0xc0006ee280) Stream removed, broadcasting: 1\nI0723 12:36:10.025022    2598 
log.go:172] (0xc000138160) (0xc0003d0c80) Stream removed, broadcasting: 3\nI0723 12:36:10.025036    2598 log.go:172] (0xc000138160) (0xc0003d0dc0) Stream removed, broadcasting: 5\n"
Jul 23 12:36:10.029: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 23 12:36:10.029: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 23 12:36:10.029: INFO: Waiting for statefulset status.replicas updated to 0
Jul 23 12:36:10.032: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jul 23 12:36:20.041: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 23 12:36:20.041: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 23 12:36:20.041: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 23 12:36:20.054: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999695s
Jul 23 12:36:21.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993525771s
Jul 23 12:36:22.064: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988175114s
Jul 23 12:36:23.070: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982697546s
Jul 23 12:36:24.075: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977362231s
Jul 23 12:36:25.080: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972088923s
Jul 23 12:36:26.085: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967599252s
Jul 23 12:36:27.090: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962349291s
Jul 23 12:36:28.095: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.956648792s
Jul 23 12:36:29.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.810947ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-949ql
Jul 23 12:36:30.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:36:30.296: INFO: stderr: "I0723 12:36:30.234384    2621 log.go:172] (0xc00014c840) (0xc000740640) Create stream\nI0723 12:36:30.234434    2621 log.go:172] (0xc00014c840) (0xc000740640) Stream added, broadcasting: 1\nI0723 12:36:30.238494    2621 log.go:172] (0xc00014c840) Reply frame received for 1\nI0723 12:36:30.238623    2621 log.go:172] (0xc00014c840) (0xc0005ecdc0) Create stream\nI0723 12:36:30.238706    2621 log.go:172] (0xc00014c840) (0xc0005ecdc0) Stream added, broadcasting: 3\nI0723 12:36:30.241380    2621 log.go:172] (0xc00014c840) Reply frame received for 3\nI0723 12:36:30.241418    2621 log.go:172] (0xc00014c840) (0xc0005ecf00) Create stream\nI0723 12:36:30.241434    2621 log.go:172] (0xc00014c840) (0xc0005ecf00) Stream added, broadcasting: 5\nI0723 12:36:30.243018    2621 log.go:172] (0xc00014c840) Reply frame received for 5\nI0723 12:36:30.288557    2621 log.go:172] (0xc00014c840) Data frame received for 5\nI0723 12:36:30.288597    2621 log.go:172] (0xc0005ecf00) (5) Data frame handling\nI0723 12:36:30.288620    2621 log.go:172] (0xc00014c840) Data frame received for 3\nI0723 12:36:30.288629    2621 log.go:172] (0xc0005ecdc0) (3) Data frame handling\nI0723 12:36:30.288640    2621 log.go:172] (0xc0005ecdc0) (3) Data frame sent\nI0723 12:36:30.288649    2621 log.go:172] (0xc00014c840) Data frame received for 3\nI0723 12:36:30.288657    2621 log.go:172] (0xc0005ecdc0) (3) Data frame handling\nI0723 12:36:30.290733    2621 log.go:172] (0xc00014c840) Data frame received for 1\nI0723 12:36:30.290769    2621 log.go:172] (0xc000740640) (1) Data frame handling\nI0723 12:36:30.290780    2621 log.go:172] (0xc000740640) (1) Data frame sent\nI0723 12:36:30.290791    2621 log.go:172] (0xc00014c840) (0xc000740640) Stream removed, broadcasting: 1\nI0723 12:36:30.290946    2621 log.go:172] (0xc00014c840) (0xc000740640) Stream removed, broadcasting: 1\nI0723 12:36:30.290966    2621 log.go:172] (0xc00014c840) (0xc0005ecdc0) Stream removed, broadcasting: 
3\nI0723 12:36:30.290975    2621 log.go:172] (0xc00014c840) (0xc0005ecf00) Stream removed, broadcasting: 5\n"
Jul 23 12:36:30.296: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 23 12:36:30.296: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 23 12:36:30.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:36:30.492: INFO: stderr: "I0723 12:36:30.422527    2643 log.go:172] (0xc000138840) (0xc000527400) Create stream\nI0723 12:36:30.422587    2643 log.go:172] (0xc000138840) (0xc000527400) Stream added, broadcasting: 1\nI0723 12:36:30.425468    2643 log.go:172] (0xc000138840) Reply frame received for 1\nI0723 12:36:30.425519    2643 log.go:172] (0xc000138840) (0xc0005274a0) Create stream\nI0723 12:36:30.425532    2643 log.go:172] (0xc000138840) (0xc0005274a0) Stream added, broadcasting: 3\nI0723 12:36:30.426456    2643 log.go:172] (0xc000138840) Reply frame received for 3\nI0723 12:36:30.426508    2643 log.go:172] (0xc000138840) (0xc000527540) Create stream\nI0723 12:36:30.426533    2643 log.go:172] (0xc000138840) (0xc000527540) Stream added, broadcasting: 5\nI0723 12:36:30.427440    2643 log.go:172] (0xc000138840) Reply frame received for 5\nI0723 12:36:30.485844    2643 log.go:172] (0xc000138840) Data frame received for 3\nI0723 12:36:30.485891    2643 log.go:172] (0xc0005274a0) (3) Data frame handling\nI0723 12:36:30.485918    2643 log.go:172] (0xc0005274a0) (3) Data frame sent\nI0723 12:36:30.485936    2643 log.go:172] (0xc000138840) Data frame received for 3\nI0723 12:36:30.485950    2643 log.go:172] (0xc0005274a0) (3) Data frame handling\nI0723 12:36:30.485986    2643 log.go:172] (0xc000138840) Data frame received for 5\nI0723 12:36:30.486056    2643 log.go:172] (0xc000527540) (5) Data frame handling\nI0723 12:36:30.487548    2643 log.go:172] (0xc000138840) Data frame received for 1\nI0723 12:36:30.487580    2643 log.go:172] (0xc000527400) (1) Data frame handling\nI0723 12:36:30.487638    2643 log.go:172] (0xc000527400) (1) Data frame sent\nI0723 12:36:30.487682    2643 log.go:172] (0xc000138840) (0xc000527400) Stream removed, broadcasting: 1\nI0723 12:36:30.487724    2643 log.go:172] (0xc000138840) Go away received\nI0723 12:36:30.487970    2643 log.go:172] (0xc000138840) (0xc000527400) Stream removed, broadcasting: 1\nI0723 12:36:30.487994    2643 
log.go:172] (0xc000138840) (0xc0005274a0) Stream removed, broadcasting: 3\nI0723 12:36:30.488008    2643 log.go:172] (0xc000138840) (0xc000527540) Stream removed, broadcasting: 5\n"
Jul 23 12:36:30.492: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 23 12:36:30.492: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 23 12:36:30.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:36:30.676: INFO: rc: 1
Jul 23 12:36:30.676: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    I0723 12:36:30.613501    2666 log.go:172] (0xc0008102c0) (0xc0007146e0) Create stream
I0723 12:36:30.613559    2666 log.go:172] (0xc0008102c0) (0xc0007146e0) Stream added, broadcasting: 1
I0723 12:36:30.615602    2666 log.go:172] (0xc0008102c0) Reply frame received for 1
I0723 12:36:30.615632    2666 log.go:172] (0xc0008102c0) (0xc0005bac80) Create stream
I0723 12:36:30.615648    2666 log.go:172] (0xc0008102c0) (0xc0005bac80) Stream added, broadcasting: 3
I0723 12:36:30.616353    2666 log.go:172] (0xc0008102c0) Reply frame received for 3
I0723 12:36:30.616400    2666 log.go:172] (0xc0008102c0) (0xc000010000) Create stream
I0723 12:36:30.616414    2666 log.go:172] (0xc0008102c0) (0xc000010000) Stream added, broadcasting: 5
I0723 12:36:30.617204    2666 log.go:172] (0xc0008102c0) Reply frame received for 5
I0723 12:36:30.670962    2666 log.go:172] (0xc0008102c0) (0xc0005bac80) Stream removed, broadcasting: 3
I0723 12:36:30.671049    2666 log.go:172] (0xc0008102c0) Data frame received for 1
I0723 12:36:30.671099    2666 log.go:172] (0xc0007146e0) (1) Data frame handling
I0723 12:36:30.671125    2666 log.go:172] (0xc0008102c0) (0xc000010000) Stream removed, broadcasting: 5
I0723 12:36:30.671195    2666 log.go:172] (0xc0007146e0) (1) Data frame sent
I0723 12:36:30.671239    2666 log.go:172] (0xc0008102c0) (0xc0007146e0) Stream removed, broadcasting: 1
I0723 12:36:30.671260    2666 log.go:172] (0xc0008102c0) Go away received
I0723 12:36:30.671707    2666 log.go:172] (0xc0008102c0) (0xc0007146e0) Stream removed, broadcasting: 1
I0723 12:36:30.671739    2666 log.go:172] (0xc0008102c0) (0xc0005bac80) Stream removed, broadcasting: 3
I0723 12:36:30.671756    2666 log.go:172] (0xc0008102c0) (0xc000010000) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "a0e1d730a5b1b0e459ccafcf18b0d80b175da966e2deddd0d9e1d20ef20988ad": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "process_linux.go:101: executing setns process caused \"exit status 1\"": unknown
 []  0xc001d72180 exit status 1   true [0xc000580048 0xc000580600 0xc0005807d8] [0xc000580048 0xc000580600 0xc0005807d8] [0xc0005803e0 0xc0005806a0] [0x935700 0x935700] 0xc0017ee240 }:
Command stdout:

stderr:
I0723 12:36:30.613501    2666 log.go:172] (0xc0008102c0) (0xc0007146e0) Create stream
I0723 12:36:30.613559    2666 log.go:172] (0xc0008102c0) (0xc0007146e0) Stream added, broadcasting: 1
I0723 12:36:30.615602    2666 log.go:172] (0xc0008102c0) Reply frame received for 1
I0723 12:36:30.615632    2666 log.go:172] (0xc0008102c0) (0xc0005bac80) Create stream
I0723 12:36:30.615648    2666 log.go:172] (0xc0008102c0) (0xc0005bac80) Stream added, broadcasting: 3
I0723 12:36:30.616353    2666 log.go:172] (0xc0008102c0) Reply frame received for 3
I0723 12:36:30.616400    2666 log.go:172] (0xc0008102c0) (0xc000010000) Create stream
I0723 12:36:30.616414    2666 log.go:172] (0xc0008102c0) (0xc000010000) Stream added, broadcasting: 5
I0723 12:36:30.617204    2666 log.go:172] (0xc0008102c0) Reply frame received for 5
I0723 12:36:30.670962    2666 log.go:172] (0xc0008102c0) (0xc0005bac80) Stream removed, broadcasting: 3
I0723 12:36:30.671049    2666 log.go:172] (0xc0008102c0) Data frame received for 1
I0723 12:36:30.671099    2666 log.go:172] (0xc0007146e0) (1) Data frame handling
I0723 12:36:30.671125    2666 log.go:172] (0xc0008102c0) (0xc000010000) Stream removed, broadcasting: 5
I0723 12:36:30.671195    2666 log.go:172] (0xc0007146e0) (1) Data frame sent
I0723 12:36:30.671239    2666 log.go:172] (0xc0008102c0) (0xc0007146e0) Stream removed, broadcasting: 1
I0723 12:36:30.671260    2666 log.go:172] (0xc0008102c0) Go away received
I0723 12:36:30.671707    2666 log.go:172] (0xc0008102c0) (0xc0007146e0) Stream removed, broadcasting: 1
I0723 12:36:30.671739    2666 log.go:172] (0xc0008102c0) (0xc0005bac80) Stream removed, broadcasting: 3
I0723 12:36:30.671756    2666 log.go:172] (0xc0008102c0) (0xc000010000) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "a0e1d730a5b1b0e459ccafcf18b0d80b175da966e2deddd0d9e1d20ef20988ad": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "process_linux.go:101: executing setns process caused \"exit status 1\"": unknown

error:
exit status 1

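From here the framework enters its RunHostCmd retry loop: the exec failed (first with an OCI runtime error, then with NotFound once ss-2 was deleted by the scale-down), so it reruns the same command every 10 seconds until it succeeds or the suite times out. A minimal local sketch of that retry shape, with a file-existence test standing in for the real `kubectl exec` call and a short sleep standing in for the 10s wait:

```shell
# Hypothetical sketch of a fixed-interval retry loop (stand-in command).
flag=$(mktemp -u)                   # path that does not exist yet
retry_cmd() { test -f "$flag"; }    # stand-in for: kubectl exec ... mv ...
tries=0
until retry_cmd; do
  tries=$((tries + 1))
  [ "$tries" -ge 3 ] && touch "$flag"  # simulate success on the 3rd failure
  sleep 0.1                            # the real framework waits 10s
done
rm -f "$flag"
echo "succeeded after $tries retries"
```

Because the scale-down to 0 deletes ss-2, the real loop below keeps hitting NotFound; that is expected while the framework waits for the StatefulSet to finish converging.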
Jul 23 12:36:40.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:36:40.813: INFO: rc: 1
Jul 23 12:36:40.813: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0026ccbd0 exit status 1   true [0xc001a1a578 0xc001a1a5b0 0xc001a1a608] [0xc001a1a578 0xc001a1a5b0 0xc001a1a608] [0xc001a1a5a0 0xc001a1a5f8] [0x935700 0x935700] 0xc0020890e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jul 23 12:36:50.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:36:50.922: INFO: rc: 1
Jul 23 12:36:50.923: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0028cb380 exit status 1   true [0xc0004b3048 0xc0004b3068 0xc0004b30e8] [0xc0004b3048 0xc0004b3068 0xc0004b30e8] [0xc0004b3060 0xc0004b30d8] [0x935700 0x935700] 0xc0022fed80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jul 23 12:37:00.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:37:01.011: INFO: rc: 1
Jul 23 12:37:01.011: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001d722a0 exit status 1   true [0xc000580890 0xc0005809a8 0xc000580c30] [0xc000580890 0xc0005809a8 0xc000580c30] [0xc000580908 0xc000580be8] [0x935700 0x935700] 0xc0017ee720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jul 23 12:37:11.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:37:11.114: INFO: rc: 1
Jul 23 12:37:11.114: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0023eb260 exit status 1   true [0xc0003fe9f8 0xc0003fea30 0xc0003fea58] [0xc0003fe9f8 0xc0003fea30 0xc0003fea58] [0xc0003fea20 0xc0003fea48] [0x935700 0x935700] 0xc001c60d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jul 23 12:37:21.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:37:21.201: INFO: rc: 1
Jul 23 12:37:21.201: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001d723f0 exit status 1   true [0xc000580c98 0xc000580f38 0xc000580ff8] [0xc000580c98 0xc000580f38 0xc000580ff8] [0xc000580e88 0xc000580fe8] [0x935700 0x935700] 0xc0017eed80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jul 23 12:37:31.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:37:31.291: INFO: rc: 1
Jul 23 12:37:31.291: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0028cb4d0 exit status 1   true [0xc0004b3108 0xc0004b3160 0xc0004b3180] [0xc0004b3108 0xc0004b3160 0xc0004b3180] [0xc0004b3148 0xc0004b3178] [0x935700 0x935700] 0xc0022ff020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jul 23 12:37:41.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:37:41.376: INFO: rc: 1
Jul 23 12:37:41.376: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0028cb5f0 exit status 1   true [0xc0004b3198 0xc0004b31c0 0xc0004b3278] [0xc0004b3198 0xc0004b31c0 0xc0004b3278] [0xc0004b31b8 0xc0004b3238] [0x935700 0x935700] 0xc0022ff320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Jul 23 12:37:51.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:37:51.473: INFO: rc: 1
Jul 23 12:37:51.474: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0026ccd50 exit status 1   true [0xc001a1a628 0xc001a1a668 0xc001a1a6b8] [0xc001a1a628 0xc001a1a668 0xc001a1a6b8] [0xc001a1a658 0xc001a1a698] [0x935700 0x935700] 0xc002089440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

[... 21 further identical RunHostCmd retry attempts, one every 10s from 12:38:01 through 12:41:23, each exiting with rc 1 and the same 'Error from server (NotFound): pods "ss-2" not found', elided ...]
Jul 23 12:41:33.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-949ql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 23 12:41:33.525: INFO: rc: 1
Jul 23 12:41:33.525: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jul 23 12:41:33.525: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 23 12:41:33.536: INFO: Deleting all statefulset in ns e2e-tests-statefulset-949ql
Jul 23 12:41:33.538: INFO: Scaling statefulset ss to 0
Jul 23 12:41:33.543: INFO: Waiting for statefulset status.replicas updated to 0
Jul 23 12:41:33.545: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:41:33.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-949ql" for this suite.
Jul 23 12:41:39.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:41:39.600: INFO: namespace: e2e-tests-statefulset-949ql, resource: bindings, ignored listing per whitelist
Jul 23 12:41:39.656: INFO: namespace e2e-tests-statefulset-949ql deletion completed in 6.09833878s

• [SLOW TEST:371.357 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:41:39.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 12:41:39.744: INFO: Creating deployment "nginx-deployment"
Jul 23 12:41:39.757: INFO: Waiting for observed generation 1
Jul 23 12:41:41.785: INFO: Waiting for all required pods to come up
Jul 23 12:41:41.791: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul 23 12:41:51.800: INFO: Waiting for deployment "nginx-deployment" to complete
Jul 23 12:41:51.811: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jul 23 12:41:51.815: INFO: Updating deployment nginx-deployment
Jul 23 12:41:51.815: INFO: Waiting for observed generation 2
Jul 23 12:41:53.845: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul 23 12:41:53.916: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul 23 12:41:53.919: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul 23 12:41:53.928: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul 23 12:41:53.928: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul 23 12:41:53.931: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul 23 12:41:53.935: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jul 23 12:41:53.935: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jul 23 12:41:53.940: INFO: Updating deployment nginx-deployment
Jul 23 12:41:53.940: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jul 23 12:41:53.967: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul 23 12:41:53.983: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 23 12:41:54.123: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-swpxp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-swpxp/deployments/nginx-deployment,UID:dab88532-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368028,Generation:3,CreationTimestamp:2020-07-23 12:41:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-07-23 12:41:52 +0000 UTC 2020-07-23 12:41:39 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-07-23 12:41:53 +0000 UTC 2020-07-23 12:41:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jul 23 12:41:54.317: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-swpxp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-swpxp/replicasets/nginx-deployment-5c98f8fb5,UID:e1ea5f55-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368065,Generation:3,CreationTimestamp:2020-07-23 12:41:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment dab88532-cce1-11ea-b2c9-0242ac120008 0xc001e041f7 0xc001e041f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 23 12:41:54.317: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jul 23 12:41:54.318: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-swpxp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-swpxp/replicasets/nginx-deployment-85ddf47c5d,UID:dabe5bda-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368058,Generation:3,CreationTimestamp:2020-07-23 12:41:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment dab88532-cce1-11ea-b2c9-0242ac120008 0xc001e042b7 0xc001e042b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jul 23 12:41:54.330: INFO: Pod "nginx-deployment-5c98f8fb5-42fb7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-42fb7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-42fb7,UID:e33503ec-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368035,Generation:0,CreationTimestamp:2020-07-23 12:41:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001d39647 0xc001d39648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d396c0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001d396e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.330: INFO: Pod "nginx-deployment-5c98f8fb5-72p65" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-72p65,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-72p65,UID:e3388b3e-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368048,Generation:0,CreationTimestamp:2020-07-23 12:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001d39757 0xc001d39758}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d397d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001d397f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.330: INFO: Pod "nginx-deployment-5c98f8fb5-9xkgj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9xkgj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-9xkgj,UID:e1eee1d0-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367972,Generation:0,CreationTimestamp:2020-07-23 12:41:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001d39867 0xc001d39868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d398e0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001d39900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-23 12:41:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.330: INFO: Pod "nginx-deployment-5c98f8fb5-clr5x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-clr5x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-clr5x,UID:e1ee6556-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367967,Generation:0,CreationTimestamp:2020-07-23 12:41:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001d399c0 0xc001d399c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d39a40} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001d39a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-23 12:41:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.330: INFO: Pod "nginx-deployment-5c98f8fb5-j2ksq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-j2ksq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-j2ksq,UID:e338a325-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368049,Generation:0,CreationTimestamp:2020-07-23 12:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001d39b20 0xc001d39b21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d39ba0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001d39bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.331: INFO: Pod "nginx-deployment-5c98f8fb5-jcv5k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jcv5k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-jcv5k,UID:e20bf50f-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367992,Generation:0,CreationTimestamp:2020-07-23 12:41:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001d39c37 0xc001d39c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d39cb0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001d39cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:52 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-23 12:41:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.331: INFO: Pod "nginx-deployment-5c98f8fb5-kl27h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kl27h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-kl27h,UID:e332e750-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368069,Generation:0,CreationTimestamp:2020-07-23 12:41:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001d39d90 0xc001d39d91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d39e10} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001d39e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:53 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-23 12:41:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.331: INFO: Pod "nginx-deployment-5c98f8fb5-kq8m7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kq8m7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-kq8m7,UID:e1eee7c5-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367978,Generation:0,CreationTimestamp:2020-07-23 12:41:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001d39f70 0xc001d39f71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d39ff0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b36030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:51 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-23 12:41:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.331: INFO: Pod "nginx-deployment-5c98f8fb5-l2drq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l2drq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-l2drq,UID:e3389810-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368060,Generation:0,CreationTimestamp:2020-07-23 12:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001b360f0 0xc001b360f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b36170} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b361f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.331: INFO: Pod "nginx-deployment-5c98f8fb5-r2m56" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-r2m56,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-r2m56,UID:e3352435-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368032,Generation:0,CreationTimestamp:2020-07-23 12:41:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001b36267 0xc001b36268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b362e0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b36300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.331: INFO: Pod "nginx-deployment-5c98f8fb5-shjq9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-shjq9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-shjq9,UID:e20797c0-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367995,Generation:0,CreationTimestamp:2020-07-23 12:41:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001b36377 0xc001b36378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b363f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b36410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:52 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-23 12:41:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.331: INFO: Pod "nginx-deployment-5c98f8fb5-vf9w4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vf9w4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-vf9w4,UID:e338af61-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368059,Generation:0,CreationTimestamp:2020-07-23 12:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001b364d0 0xc001b364d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b36550} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b36580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.331: INFO: Pod "nginx-deployment-5c98f8fb5-zhvpx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zhvpx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-5c98f8fb5-zhvpx,UID:e3404f92-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368063,Generation:0,CreationTimestamp:2020-07-23 12:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e1ea5f55-cce1-11ea-b2c9-0242ac120008 0xc001b36617 0xc001b36618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b36690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b366b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.332: INFO: Pod "nginx-deployment-85ddf47c5d-5tkt9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5tkt9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-5tkt9,UID:e332f799-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368070,Generation:0,CreationTimestamp:2020-07-23 12:41:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b36737 0xc001b36738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b367b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b367d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:53 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-23 12:41:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.332: INFO: Pod "nginx-deployment-85ddf47c5d-bqt7h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bqt7h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-bqt7h,UID:e338b13b-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368057,Generation:0,CreationTimestamp:2020-07-23 12:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b36887 0xc001b36888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b37700} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b37720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.332: INFO: Pod "nginx-deployment-85ddf47c5d-cmlbh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cmlbh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-cmlbh,UID:e338b571-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368052,Generation:0,CreationTimestamp:2020-07-23 12:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b37797 0xc001b37798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b37810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b37920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.332: INFO: Pod "nginx-deployment-85ddf47c5d-dpxvl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dpxvl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-dpxvl,UID:dac962a5-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367941,Generation:0,CreationTimestamp:2020-07-23 12:41:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b379d7 0xc001b379d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b37a70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b37a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:39 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.53,StartTime:2020-07-23 12:41:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-23 12:41:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3f4c947ab299cfa7fd4adb50c4ca8c6ce79a84d72e269ed7eb102da7ffef854a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.332: INFO: Pod "nginx-deployment-85ddf47c5d-g4lvl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g4lvl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-g4lvl,UID:e3356b88-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368044,Generation:0,CreationTimestamp:2020-07-23 12:41:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b37bc7 0xc001b37bc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b37cf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b37d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.332: INFO: Pod "nginx-deployment-85ddf47c5d-lgg28" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lgg28,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-lgg28,UID:dad4de03-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367920,Generation:0,CreationTimestamp:2020-07-23 12:41:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b37e07 0xc001b37e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b37f60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b37f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:39 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.132,StartTime:2020-07-23 12:41:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-23 12:41:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7565b9f770072b33f168124309fb23069220806266f1e347dcfa33fa665962f8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.332: INFO: Pod "nginx-deployment-85ddf47c5d-lgxtz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lgxtz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-lgxtz,UID:dac96438-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367906,Generation:0,CreationTimestamp:2020-07-23 12:41:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b140d7 0xc001b140d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b14150} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b14170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:39 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.130,StartTime:2020-07-23 12:41:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-23 12:41:48 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://76e3673b9f1703e2daa147ccf4654b09e33123871b0515643d7cc65737d3e987}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.333: INFO: Pod "nginx-deployment-85ddf47c5d-m8wwq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m8wwq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-m8wwq,UID:e338ab16-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368050,Generation:0,CreationTimestamp:2020-07-23 12:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b14237 0xc001b14238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b142b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b142d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.333: INFO: Pod "nginx-deployment-85ddf47c5d-mqnh6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mqnh6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-mqnh6,UID:dac72319-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367908,Generation:0,CreationTimestamp:2020-07-23 12:41:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b14347 0xc001b14348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b14510} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b14530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:39 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.51,StartTime:2020-07-23 12:41:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-23 12:41:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://18e574c14f07ea68d8c8d8a09bcc3d3a2aa0ece500739e0bc79e769d62446a3e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.333: INFO: Pod "nginx-deployment-85ddf47c5d-ntww6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ntww6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-ntww6,UID:e332196d-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368051,Generation:0,CreationTimestamp:2020-07-23 12:41:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b14707 0xc001b14708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b14820} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b14840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:53 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-23 12:41:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.333: INFO: Pod "nginx-deployment-85ddf47c5d-pgqtp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pgqtp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-pgqtp,UID:dac72fe8-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367888,Generation:0,CreationTimestamp:2020-07-23 12:41:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b14907 0xc001b14908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b14f20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b154d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:39 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.129,StartTime:2020-07-23 12:41:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-23 12:41:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8225103f068b4305b69913d0b5c67988696bcba06f4a7f595b191914176ad091}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.333: INFO: Pod "nginx-deployment-85ddf47c5d-qszwf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qszwf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-qszwf,UID:e3355ba4-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368038,Generation:0,CreationTimestamp:2020-07-23 12:41:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b15667 0xc001b15668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001b157c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b15850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.333: INFO: Pod "nginx-deployment-85ddf47c5d-svwvq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-svwvq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-svwvq,UID:e338b93c-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368055,Generation:0,CreationTimestamp:2020-07-23 12:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b158c7 0xc001b158c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001b15940} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b15960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.334: INFO: Pod "nginx-deployment-85ddf47c5d-thcn2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-thcn2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-thcn2,UID:dad4d825-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367935,Generation:0,CreationTimestamp:2020-07-23 12:41:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b159f7 0xc001b159f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001b15af0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b15b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:39 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.55,StartTime:2020-07-23 12:41:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-23 12:41:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5d65ac089df6d0139f0546efdebc73eec0b4b92e6ef856f8cacfac89d521787e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.334: INFO: Pod "nginx-deployment-85ddf47c5d-thtzh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-thtzh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-thtzh,UID:dac93b25-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367921,Generation:0,CreationTimestamp:2020-07-23 12:41:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b15bd7 0xc001b15bd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001b15cc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b15ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:39 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.52,StartTime:2020-07-23 12:41:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-23 12:41:48 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9a512c94267402a5592f8e23bc8237fb6a82256a8893be5b3955914476d25453}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.334: INFO: Pod "nginx-deployment-85ddf47c5d-tm698" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tm698,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-tm698,UID:e338aff9-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368056,Generation:0,CreationTimestamp:2020-07-23 12:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b15da7 0xc001b15da8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001b15e20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b15e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.334: INFO: Pod "nginx-deployment-85ddf47c5d-tnsqh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tnsqh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-tnsqh,UID:dac6b6c4-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2367891,Generation:0,CreationTimestamp:2020-07-23 12:41:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc001b15eb7 0xc001b15eb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc001b15f30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b15f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:39 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.128,StartTime:2020-07-23 12:41:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-23 12:41:44 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f2849fda0475f39a61c925f451c55ab754f211c417ba4be2cdbb6947dd6e23b5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.334: INFO: Pod "nginx-deployment-85ddf47c5d-vc2bp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vc2bp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-vc2bp,UID:e332f7fc-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368061,Generation:0,CreationTimestamp:2020-07-23 12:41:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc000e46017 0xc000e46018}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc000e46090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e460b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:53 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-23 12:41:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.334: INFO: Pod "nginx-deployment-85ddf47c5d-vm5m4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vm5m4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-vm5m4,UID:e33560cd-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368045,Generation:0,CreationTimestamp:2020-07-23 12:41:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc000e46167 0xc000e46168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc000e461e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e46200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 23 12:41:54.335: INFO: Pod "nginx-deployment-85ddf47c5d-x8qrh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x8qrh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-swpxp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-swpxp/pods/nginx-deployment-85ddf47c5d-x8qrh,UID:e3356ee4-cce1-11ea-b2c9-0242ac120008,ResourceVersion:2368043,Generation:0,CreationTimestamp:2020-07-23 12:41:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d dabe5bda-cce1-11ea-b2c9-0242ac120008 0xc000e46277 0xc000e46278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4dzk2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dzk2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4dzk2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0xc000e462f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e46310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-23 12:41:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:41:54.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-swpxp" for this suite.
Jul 23 12:42:22.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:42:22.574: INFO: namespace: e2e-tests-deployment-swpxp, resource: bindings, ignored listing per whitelist
Jul 23 12:42:22.586: INFO: namespace e2e-tests-deployment-swpxp deletion completed in 28.175127285s

• [SLOW TEST:42.930 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
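Editor's note: the deployment test above exercises proportional scaling — when a Deployment is scaled while two ReplicaSets coexist mid-rollout, the replica delta is split across the ReplicaSets in proportion to their current sizes. The following is a minimal illustrative sketch of that distribution idea only; the real controller logic (in Kubernetes' deployment controller) additionally accounts for maxSurge and breaks ties by ReplicaSet creation timestamp.

```python
def distribute_proportionally(replica_counts, delta):
    """Split `delta` added replicas across ReplicaSets in proportion to
    their current sizes, using largest-remainder rounding.

    Illustrative sketch only -- NOT the actual Kubernetes controller
    code, which also caps each ReplicaSet by maxSurge and resolves
    rounding leftovers by creation order."""
    total = sum(replica_counts)
    if total == 0:
        # Nothing running yet: give the whole delta to the first set.
        return [delta] + [0] * (len(replica_counts) - 1)
    shares = [delta * c / total for c in replica_counts]
    result = [int(s) for s in shares]
    # Hand remaining replicas to the largest fractional remainders.
    leftover = delta - sum(result)
    order = sorted(range(len(shares)),
                   key=lambda i: shares[i] - result[i], reverse=True)
    for i in order[:leftover]:
        result[i] += 1
    return result
```

For example, scaling by 3 while two ReplicaSets each hold 5 replicas yields a 2/1 split, which is the kind of proportional distribution the `[Conformance]` test above verifies.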
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:42:22.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 23 12:42:22.689: INFO: Waiting up to 5m0s for pod "pod-f44f1d1f-cce1-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-jj549" to be "success or failure"
Jul 23 12:42:22.693: INFO: Pod "pod-f44f1d1f-cce1-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.281319ms
Jul 23 12:42:24.697: INFO: Pod "pod-f44f1d1f-cce1-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008161984s
Jul 23 12:42:26.702: INFO: Pod "pod-f44f1d1f-cce1-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012938678s
STEP: Saw pod success
Jul 23 12:42:26.702: INFO: Pod "pod-f44f1d1f-cce1-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:42:26.705: INFO: Trying to get logs from node hunter-worker2 pod pod-f44f1d1f-cce1-11ea-92a5-0242ac11000b container test-container: 
STEP: delete the pod
Jul 23 12:42:26.740: INFO: Waiting for pod pod-f44f1d1f-cce1-11ea-92a5-0242ac11000b to disappear
Jul 23 12:42:26.759: INFO: Pod pod-f44f1d1f-cce1-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:42:26.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jj549" for this suite.
Jul 23 12:42:32.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:42:32.784: INFO: namespace: e2e-tests-emptydir-jj549, resource: bindings, ignored listing per whitelist
Jul 23 12:42:32.854: INFO: namespace e2e-tests-emptydir-jj549 deletion completed in 6.091646192s

• [SLOW TEST:10.268 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
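Editor's note: every pod test in this log follows the same wait pattern — "Waiting up to 5m0s for pod … to be 'success or failure'", then polling roughly every two seconds until the phase leaves Pending or the timeout expires. A self-contained sketch of that poll-until-condition loop is below; the function and parameter names are illustrative, not the e2e framework's actual Go API.

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll `check()` every `interval` seconds until it returns a truthy
    value, raising TimeoutError after `timeout` seconds.

    Mirrors the "Waiting up to 5m0s ... Elapsed: 2.0s" pattern in the
    log above; names here are illustrative stand-ins for the framework's
    pod-wait helpers."""
    deadline = clock() + timeout
    while True:
        result = check()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("condition not met within %.0fs" % timeout)
        sleep(interval)
```

The injectable `clock` and `sleep` parameters make the loop testable without real delays, which is why the framework's elapsed-time log lines can show sub-millisecond first checks followed by ~2 s increments.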
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:42:32.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 23 12:42:32.994: INFO: Waiting up to 5m0s for pod "pod-fa6e294a-cce1-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-4l5t7" to be "success or failure"
Jul 23 12:42:33.037: INFO: Pod "pod-fa6e294a-cce1-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.691705ms
Jul 23 12:42:35.041: INFO: Pod "pod-fa6e294a-cce1-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04677113s
Jul 23 12:42:37.046: INFO: Pod "pod-fa6e294a-cce1-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051405195s
STEP: Saw pod success
Jul 23 12:42:37.046: INFO: Pod "pod-fa6e294a-cce1-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:42:37.051: INFO: Trying to get logs from node hunter-worker2 pod pod-fa6e294a-cce1-11ea-92a5-0242ac11000b container test-container: 
STEP: delete the pod
Jul 23 12:42:37.083: INFO: Waiting for pod pod-fa6e294a-cce1-11ea-92a5-0242ac11000b to disappear
Jul 23 12:42:37.095: INFO: Pod pod-fa6e294a-cce1-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:42:37.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4l5t7" for this suite.
Jul 23 12:42:43.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:42:43.190: INFO: namespace: e2e-tests-emptydir-4l5t7, resource: bindings, ignored listing per whitelist
Jul 23 12:42:43.210: INFO: namespace e2e-tests-emptydir-4l5t7 deletion completed in 6.111271736s

• [SLOW TEST:10.356 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:42:43.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0098e9f6-cce2-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 23 12:42:43.328: INFO: Waiting up to 5m0s for pod "pod-secrets-009d8577-cce2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-secrets-92jpv" to be "success or failure"
Jul 23 12:42:43.331: INFO: Pod "pod-secrets-009d8577-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.107697ms
Jul 23 12:42:45.335: INFO: Pod "pod-secrets-009d8577-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006984044s
Jul 23 12:42:47.339: INFO: Pod "pod-secrets-009d8577-cce2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01049331s
STEP: Saw pod success
Jul 23 12:42:47.339: INFO: Pod "pod-secrets-009d8577-cce2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:42:47.341: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-009d8577-cce2-11ea-92a5-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 23 12:42:47.575: INFO: Waiting for pod pod-secrets-009d8577-cce2-11ea-92a5-0242ac11000b to disappear
Jul 23 12:42:47.622: INFO: Pod pod-secrets-009d8577-cce2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:42:47.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-92jpv" for this suite.
Jul 23 12:42:53.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:42:53.791: INFO: namespace: e2e-tests-secrets-92jpv, resource: bindings, ignored listing per whitelist
Jul 23 12:42:53.804: INFO: namespace e2e-tests-secrets-92jpv deletion completed in 6.17448029s

• [SLOW TEST:10.593 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:42:53.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 23 12:42:53.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-bfkdn'
Jul 23 12:42:56.563: INFO: stderr: ""
Jul 23 12:42:56.563: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jul 23 12:42:56.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bfkdn'
Jul 23 12:43:00.669: INFO: stderr: ""
Jul 23 12:43:00.669: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:43:00.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bfkdn" for this suite.
Jul 23 12:43:06.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:43:06.751: INFO: namespace: e2e-tests-kubectl-bfkdn, resource: bindings, ignored listing per whitelist
Jul 23 12:43:06.769: INFO: namespace e2e-tests-kubectl-bfkdn deletion completed in 6.096207884s

• [SLOW TEST:12.966 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:43:06.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jul 23 12:43:06.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w9dlq'
Jul 23 12:43:07.121: INFO: stderr: ""
Jul 23 12:43:07.121: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 23 12:43:08.185: INFO: Selector matched 1 pods for map[app:redis]
Jul 23 12:43:08.185: INFO: Found 0 / 1
Jul 23 12:43:09.126: INFO: Selector matched 1 pods for map[app:redis]
Jul 23 12:43:09.126: INFO: Found 0 / 1
Jul 23 12:43:10.126: INFO: Selector matched 1 pods for map[app:redis]
Jul 23 12:43:10.126: INFO: Found 0 / 1
Jul 23 12:43:11.126: INFO: Selector matched 1 pods for map[app:redis]
Jul 23 12:43:11.126: INFO: Found 1 / 1
Jul 23 12:43:11.126: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul 23 12:43:11.130: INFO: Selector matched 1 pods for map[app:redis]
Jul 23 12:43:11.130: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 23 12:43:11.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-fwctq --namespace=e2e-tests-kubectl-w9dlq -p {"metadata":{"annotations":{"x":"y"}}}'
Jul 23 12:43:11.241: INFO: stderr: ""
Jul 23 12:43:11.241: INFO: stdout: "pod/redis-master-fwctq patched\n"
STEP: checking annotations
Jul 23 12:43:11.270: INFO: Selector matched 1 pods for map[app:redis]
Jul 23 12:43:11.270: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:43:11.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w9dlq" for this suite.
Jul 23 12:43:33.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:43:33.377: INFO: namespace: e2e-tests-kubectl-w9dlq, resource: bindings, ignored listing per whitelist
Jul 23 12:43:33.388: INFO: namespace e2e-tests-kubectl-w9dlq deletion completed in 22.08898961s

• [SLOW TEST:26.618 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:43:33.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 12:43:33.517: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e870dbc-cce2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-4gnhq" to be "success or failure"
Jul 23 12:43:33.521: INFO: Pod "downwardapi-volume-1e870dbc-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239953ms
Jul 23 12:43:35.525: INFO: Pod "downwardapi-volume-1e870dbc-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008180412s
Jul 23 12:43:37.530: INFO: Pod "downwardapi-volume-1e870dbc-cce2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012508835s
STEP: Saw pod success
Jul 23 12:43:37.530: INFO: Pod "downwardapi-volume-1e870dbc-cce2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:43:37.533: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1e870dbc-cce2-11ea-92a5-0242ac11000b container client-container: 
STEP: delete the pod
Jul 23 12:43:37.596: INFO: Waiting for pod downwardapi-volume-1e870dbc-cce2-11ea-92a5-0242ac11000b to disappear
Jul 23 12:43:37.599: INFO: Pod downwardapi-volume-1e870dbc-cce2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:43:37.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4gnhq" for this suite.
Jul 23 12:43:43.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:43:43.671: INFO: namespace: e2e-tests-downward-api-4gnhq, resource: bindings, ignored listing per whitelist
Jul 23 12:43:43.690: INFO: namespace e2e-tests-downward-api-4gnhq deletion completed in 6.088186406s

• [SLOW TEST:10.301 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:43:43.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 23 12:43:43.811: INFO: Waiting up to 5m0s for pod "downward-api-24aaa325-cce2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-6xtpr" to be "success or failure"
Jul 23 12:43:43.825: INFO: Pod "downward-api-24aaa325-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.743108ms
Jul 23 12:43:45.829: INFO: Pod "downward-api-24aaa325-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01792317s
Jul 23 12:43:47.837: INFO: Pod "downward-api-24aaa325-cce2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025805055s
STEP: Saw pod success
Jul 23 12:43:47.837: INFO: Pod "downward-api-24aaa325-cce2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:43:47.840: INFO: Trying to get logs from node hunter-worker2 pod downward-api-24aaa325-cce2-11ea-92a5-0242ac11000b container dapi-container: 
STEP: delete the pod
Jul 23 12:43:47.865: INFO: Waiting for pod downward-api-24aaa325-cce2-11ea-92a5-0242ac11000b to disappear
Jul 23 12:43:47.869: INFO: Pod downward-api-24aaa325-cce2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:43:47.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6xtpr" for this suite.
Jul 23 12:43:53.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:43:53.918: INFO: namespace: e2e-tests-downward-api-6xtpr, resource: bindings, ignored listing per whitelist
Jul 23 12:43:53.966: INFO: namespace e2e-tests-downward-api-6xtpr deletion completed in 6.090542083s

• [SLOW TEST:10.276 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:43:53.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:43:54.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ntsvf" for this suite.
Jul 23 12:44:16.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:44:16.243: INFO: namespace: e2e-tests-pods-ntsvf, resource: bindings, ignored listing per whitelist
Jul 23 12:44:16.277: INFO: namespace e2e-tests-pods-ntsvf deletion completed in 22.113764907s

• [SLOW TEST:22.311 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:44:16.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jul 23 12:44:16.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-g72jw'
Jul 23 12:44:16.667: INFO: stderr: ""
Jul 23 12:44:16.667: INFO: stdout: "pod/pause created\n"
Jul 23 12:44:16.667: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul 23 12:44:16.667: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-g72jw" to be "running and ready"
Jul 23 12:44:16.679: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.348677ms
Jul 23 12:44:18.766: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099246438s
Jul 23 12:44:20.797: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.129610615s
Jul 23 12:44:20.797: INFO: Pod "pause" satisfied condition "running and ready"
Jul 23 12:44:20.797: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jul 23 12:44:20.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-g72jw'
Jul 23 12:44:20.926: INFO: stderr: ""
Jul 23 12:44:20.926: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul 23 12:44:20.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-g72jw'
Jul 23 12:44:21.018: INFO: stderr: ""
Jul 23 12:44:21.018: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul 23 12:44:21.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-g72jw'
Jul 23 12:44:21.138: INFO: stderr: ""
Jul 23 12:44:21.138: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul 23 12:44:21.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-g72jw'
Jul 23 12:44:21.233: INFO: stderr: ""
Jul 23 12:44:21.233: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jul 23 12:44:21.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-g72jw'
Jul 23 12:44:21.342: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 23 12:44:21.343: INFO: stdout: "pod \"pause\" force deleted\n"
Jul 23 12:44:21.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-g72jw'
Jul 23 12:44:21.429: INFO: stderr: "No resources found.\n"
Jul 23 12:44:21.429: INFO: stdout: ""
Jul 23 12:44:21.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-g72jw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 23 12:44:21.520: INFO: stderr: ""
Jul 23 12:44:21.520: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:44:21.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-g72jw" for this suite.
Jul 23 12:44:27.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:44:27.784: INFO: namespace: e2e-tests-kubectl-g72jw, resource: bindings, ignored listing per whitelist
Jul 23 12:44:27.840: INFO: namespace e2e-tests-kubectl-g72jw deletion completed in 6.316893794s

• [SLOW TEST:11.563 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:44:27.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-3efd6fce-cce2-11ea-92a5-0242ac11000b
STEP: Creating secret with name secret-projected-all-test-volume-3efd6fae-cce2-11ea-92a5-0242ac11000b
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 23 12:44:27.992: INFO: Waiting up to 5m0s for pod "projected-volume-3efd6f59-cce2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-projected-85t7f" to be "success or failure"
Jul 23 12:44:28.009: INFO: Pod "projected-volume-3efd6f59-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.605853ms
Jul 23 12:44:30.012: INFO: Pod "projected-volume-3efd6f59-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019984132s
Jul 23 12:44:32.016: INFO: Pod "projected-volume-3efd6f59-cce2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023879593s
STEP: Saw pod success
Jul 23 12:44:32.016: INFO: Pod "projected-volume-3efd6f59-cce2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:44:32.019: INFO: Trying to get logs from node hunter-worker pod projected-volume-3efd6f59-cce2-11ea-92a5-0242ac11000b container projected-all-volume-test: 
STEP: delete the pod
Jul 23 12:44:32.075: INFO: Waiting for pod projected-volume-3efd6f59-cce2-11ea-92a5-0242ac11000b to disappear
Jul 23 12:44:32.180: INFO: Pod projected-volume-3efd6f59-cce2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:44:32.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-85t7f" for this suite.
Jul 23 12:44:38.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:44:38.257: INFO: namespace: e2e-tests-projected-85t7f, resource: bindings, ignored listing per whitelist
Jul 23 12:44:38.323: INFO: namespace e2e-tests-projected-85t7f deletion completed in 6.138515653s

• [SLOW TEST:10.483 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:44:38.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 23 12:44:38.427: INFO: Waiting up to 5m0s for pod "pod-45342b08-cce2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-emptydir-lzh2v" to be "success or failure"
Jul 23 12:44:38.433: INFO: Pod "pod-45342b08-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.274648ms
Jul 23 12:44:40.438: INFO: Pod "pod-45342b08-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01035045s
Jul 23 12:44:42.442: INFO: Pod "pod-45342b08-cce2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014520654s
STEP: Saw pod success
Jul 23 12:44:42.442: INFO: Pod "pod-45342b08-cce2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:44:42.445: INFO: Trying to get logs from node hunter-worker pod pod-45342b08-cce2-11ea-92a5-0242ac11000b container test-container: 
STEP: delete the pod
Jul 23 12:44:42.464: INFO: Waiting for pod pod-45342b08-cce2-11ea-92a5-0242ac11000b to disappear
Jul 23 12:44:42.479: INFO: Pod pod-45342b08-cce2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:44:42.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lzh2v" for this suite.
Jul 23 12:44:48.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:44:48.554: INFO: namespace: e2e-tests-emptydir-lzh2v, resource: bindings, ignored listing per whitelist
Jul 23 12:44:48.630: INFO: namespace e2e-tests-emptydir-lzh2v deletion completed in 6.147918143s

• [SLOW TEST:10.306 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:44:48.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4b5e0cfc-cce2-11ea-92a5-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 23 12:44:48.747: INFO: Waiting up to 5m0s for pod "pod-secrets-4b5ea52d-cce2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-secrets-f6w2x" to be "success or failure"
Jul 23 12:44:48.763: INFO: Pod "pod-secrets-4b5ea52d-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.762562ms
Jul 23 12:44:50.827: INFO: Pod "pod-secrets-4b5ea52d-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080260782s
Jul 23 12:44:52.875: INFO: Pod "pod-secrets-4b5ea52d-cce2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128516471s
STEP: Saw pod success
Jul 23 12:44:52.875: INFO: Pod "pod-secrets-4b5ea52d-cce2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:44:52.878: INFO: Trying to get logs from node hunter-worker pod pod-secrets-4b5ea52d-cce2-11ea-92a5-0242ac11000b container secret-env-test: 
STEP: delete the pod
Jul 23 12:44:52.909: INFO: Waiting for pod pod-secrets-4b5ea52d-cce2-11ea-92a5-0242ac11000b to disappear
Jul 23 12:44:52.919: INFO: Pod pod-secrets-4b5ea52d-cce2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:44:52.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-f6w2x" for this suite.
Jul 23 12:44:58.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:44:58.973: INFO: namespace: e2e-tests-secrets-f6w2x, resource: bindings, ignored listing per whitelist
Jul 23 12:44:59.015: INFO: namespace e2e-tests-secrets-f6w2x deletion completed in 6.091212948s

• [SLOW TEST:10.385 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:44:59.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul 23 12:44:59.134: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:45:06.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-4nmf5" for this suite.
Jul 23 12:45:12.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:45:12.285: INFO: namespace: e2e-tests-init-container-4nmf5, resource: bindings, ignored listing per whitelist
Jul 23 12:45:12.311: INFO: namespace e2e-tests-init-container-4nmf5 deletion completed in 6.073727361s

• [SLOW TEST:13.296 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:45:12.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 23 12:45:12.440: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5979ed5f-cce2-11ea-92a5-0242ac11000b" in namespace "e2e-tests-downward-api-zv98k" to be "success or failure"
Jul 23 12:45:12.446: INFO: Pod "downwardapi-volume-5979ed5f-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.49645ms
Jul 23 12:45:14.450: INFO: Pod "downwardapi-volume-5979ed5f-cce2-11ea-92a5-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009743977s
Jul 23 12:45:16.454: INFO: Pod "downwardapi-volume-5979ed5f-cce2-11ea-92a5-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013612233s
STEP: Saw pod success
Jul 23 12:45:16.454: INFO: Pod "downwardapi-volume-5979ed5f-cce2-11ea-92a5-0242ac11000b" satisfied condition "success or failure"
Jul 23 12:45:16.456: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-5979ed5f-cce2-11ea-92a5-0242ac11000b container client-container: 
STEP: delete the pod
Jul 23 12:45:16.472: INFO: Waiting for pod downwardapi-volume-5979ed5f-cce2-11ea-92a5-0242ac11000b to disappear
Jul 23 12:45:16.482: INFO: Pod downwardapi-volume-5979ed5f-cce2-11ea-92a5-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:45:16.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zv98k" for this suite.
Jul 23 12:45:22.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:45:22.573: INFO: namespace: e2e-tests-downward-api-zv98k, resource: bindings, ignored listing per whitelist
Jul 23 12:45:22.594: INFO: namespace e2e-tests-downward-api-zv98k deletion completed in 6.107590862s

• [SLOW TEST:10.282 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:45:22.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jul 23 12:45:22.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:22.939: INFO: stderr: ""
Jul 23 12:45:22.939: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 23 12:45:22.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:23.060: INFO: stderr: ""
Jul 23 12:45:23.060: INFO: stdout: "update-demo-nautilus-txxpb update-demo-nautilus-xxq9b "
Jul 23 12:45:23.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-txxpb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:23.188: INFO: stderr: ""
Jul 23 12:45:23.188: INFO: stdout: ""
Jul 23 12:45:23.188: INFO: update-demo-nautilus-txxpb is created but not running
Jul 23 12:45:28.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:28.301: INFO: stderr: ""
Jul 23 12:45:28.301: INFO: stdout: "update-demo-nautilus-txxpb update-demo-nautilus-xxq9b "
Jul 23 12:45:28.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-txxpb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:28.398: INFO: stderr: ""
Jul 23 12:45:28.398: INFO: stdout: "true"
Jul 23 12:45:28.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-txxpb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:28.498: INFO: stderr: ""
Jul 23 12:45:28.498: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 23 12:45:28.498: INFO: validating pod update-demo-nautilus-txxpb
Jul 23 12:45:28.502: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 23 12:45:28.502: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 23 12:45:28.502: INFO: update-demo-nautilus-txxpb is verified up and running
Jul 23 12:45:28.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xxq9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:28.603: INFO: stderr: ""
Jul 23 12:45:28.603: INFO: stdout: "true"
Jul 23 12:45:28.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xxq9b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:28.705: INFO: stderr: ""
Jul 23 12:45:28.705: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 23 12:45:28.705: INFO: validating pod update-demo-nautilus-xxq9b
Jul 23 12:45:28.708: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 23 12:45:28.708: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 23 12:45:28.708: INFO: update-demo-nautilus-xxq9b is verified up and running
STEP: rolling-update to new replication controller
Jul 23 12:45:28.710: INFO: scanned /root for discovery docs: 
Jul 23 12:45:28.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:51.326: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 23 12:45:51.326: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 23 12:45:51.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:51.462: INFO: stderr: ""
Jul 23 12:45:51.462: INFO: stdout: "update-demo-kitten-4kj4v update-demo-kitten-9qvwh "
Jul 23 12:45:51.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4kj4v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:51.561: INFO: stderr: ""
Jul 23 12:45:51.562: INFO: stdout: "true"
Jul 23 12:45:51.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4kj4v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:51.653: INFO: stderr: ""
Jul 23 12:45:51.653: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 23 12:45:51.653: INFO: validating pod update-demo-kitten-4kj4v
Jul 23 12:45:51.667: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 23 12:45:51.667: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 23 12:45:51.667: INFO: update-demo-kitten-4kj4v is verified up and running
Jul 23 12:45:51.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9qvwh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:51.752: INFO: stderr: ""
Jul 23 12:45:51.752: INFO: stdout: "true"
Jul 23 12:45:51.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9qvwh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tnwxb'
Jul 23 12:45:51.837: INFO: stderr: ""
Jul 23 12:45:51.837: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 23 12:45:51.837: INFO: validating pod update-demo-kitten-9qvwh
Jul 23 12:45:51.842: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 23 12:45:51.842: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 23 12:45:51.842: INFO: update-demo-kitten-9qvwh is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:45:51.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tnwxb" for this suite.
Jul 23 12:46:13.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:46:13.959: INFO: namespace: e2e-tests-kubectl-tnwxb, resource: bindings, ignored listing per whitelist
Jul 23 12:46:14.019: INFO: namespace e2e-tests-kubectl-tnwxb deletion completed in 22.172570707s

• [SLOW TEST:51.426 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
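The readiness checks above show the framework re-running the same `kubectl get pods … --template=…` probe every 5 seconds until it prints `true` (the "created but not running" lines are the failed attempts). A minimal sketch of that poll loop in plain shell, with a stub probe standing in for the real kubectl call — the names `probe_pod_running`, `max_attempts`, and the stub's behavior are illustrative assumptions, not part of the e2e framework:

```shell
#!/bin/sh
# Stub for the kubectl readiness probe: reports "" twice, then "true",
# mimicking the "created but not running" -> "true" transitions in the log.
# The real probe is the go-template query over .status.containerStatuses.
PROBE_CALLS=0
probe_pod_running() {
    PROBE_CALLS=$((PROBE_CALLS + 1))
    if [ "$PROBE_CALLS" -ge 3 ]; then
        out="true"
    else
        out=""
    fi
}

# Poll until the probe reports "true", up to a fixed number of attempts.
attempt=0
max_attempts=10
while [ "$attempt" -lt "$max_attempts" ]; do
    probe_pod_running
    if [ "$out" = "true" ]; then
        echo "pod is running after $attempt retries"
        break
    fi
    attempt=$((attempt + 1))
    # The real framework sleeps 5s between probes; omitted here.
done
```

The real framework does this per pod name returned by the list query, which is why each pod in the log gets its own pair of "running" and "image" template checks.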
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:46:14.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 12:46:14.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:46:18.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6k266" for this suite.
Jul 23 12:47:08.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:47:08.363: INFO: namespace: e2e-tests-pods-6k266, resource: bindings, ignored listing per whitelist
Jul 23 12:47:08.392: INFO: namespace e2e-tests-pods-6k266 deletion completed in 50.105110083s

• [SLOW TEST:54.372 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:47:08.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jul 23 12:47:08.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:08.791: INFO: stderr: ""
Jul 23 12:47:08.791: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 23 12:47:08.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:08.973: INFO: stderr: ""
Jul 23 12:47:08.973: INFO: stdout: "update-demo-nautilus-4zlkk update-demo-nautilus-sx7n6 "
Jul 23 12:47:08.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4zlkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:09.082: INFO: stderr: ""
Jul 23 12:47:09.082: INFO: stdout: ""
Jul 23 12:47:09.082: INFO: update-demo-nautilus-4zlkk is created but not running
Jul 23 12:47:14.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:14.173: INFO: stderr: ""
Jul 23 12:47:14.173: INFO: stdout: "update-demo-nautilus-4zlkk update-demo-nautilus-sx7n6 "
Jul 23 12:47:14.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4zlkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:14.274: INFO: stderr: ""
Jul 23 12:47:14.274: INFO: stdout: ""
Jul 23 12:47:14.274: INFO: update-demo-nautilus-4zlkk is created but not running
Jul 23 12:47:19.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:19.385: INFO: stderr: ""
Jul 23 12:47:19.385: INFO: stdout: "update-demo-nautilus-4zlkk update-demo-nautilus-sx7n6 "
Jul 23 12:47:19.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4zlkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:19.482: INFO: stderr: ""
Jul 23 12:47:19.482: INFO: stdout: "true"
Jul 23 12:47:19.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4zlkk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:19.588: INFO: stderr: ""
Jul 23 12:47:19.588: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 23 12:47:19.588: INFO: validating pod update-demo-nautilus-4zlkk
Jul 23 12:47:19.592: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 23 12:47:19.592: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 23 12:47:19.592: INFO: update-demo-nautilus-4zlkk is verified up and running
Jul 23 12:47:19.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sx7n6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:19.687: INFO: stderr: ""
Jul 23 12:47:19.687: INFO: stdout: "true"
Jul 23 12:47:19.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sx7n6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:19.784: INFO: stderr: ""
Jul 23 12:47:19.784: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 23 12:47:19.784: INFO: validating pod update-demo-nautilus-sx7n6
Jul 23 12:47:19.788: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 23 12:47:19.788: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 23 12:47:19.788: INFO: update-demo-nautilus-sx7n6 is verified up and running
STEP: scaling down the replication controller
Jul 23 12:47:19.789: INFO: scanned /root for discovery docs: 
Jul 23 12:47:19.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:20.942: INFO: stderr: ""
Jul 23 12:47:20.942: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 23 12:47:20.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:21.036: INFO: stderr: ""
Jul 23 12:47:21.036: INFO: stdout: "update-demo-nautilus-4zlkk update-demo-nautilus-sx7n6 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul 23 12:47:26.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:26.144: INFO: stderr: ""
Jul 23 12:47:26.144: INFO: stdout: "update-demo-nautilus-sx7n6 "
Jul 23 12:47:26.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sx7n6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:26.239: INFO: stderr: ""
Jul 23 12:47:26.239: INFO: stdout: "true"
Jul 23 12:47:26.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sx7n6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:26.334: INFO: stderr: ""
Jul 23 12:47:26.334: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 23 12:47:26.334: INFO: validating pod update-demo-nautilus-sx7n6
Jul 23 12:47:26.338: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 23 12:47:26.338: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 23 12:47:26.338: INFO: update-demo-nautilus-sx7n6 is verified up and running
STEP: scaling up the replication controller
Jul 23 12:47:26.340: INFO: scanned /root for discovery docs: 
Jul 23 12:47:26.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:27.524: INFO: stderr: ""
Jul 23 12:47:27.524: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 23 12:47:27.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:27.612: INFO: stderr: ""
Jul 23 12:47:27.612: INFO: stdout: "update-demo-nautilus-mqfrc update-demo-nautilus-sx7n6 "
Jul 23 12:47:27.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqfrc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:27.696: INFO: stderr: ""
Jul 23 12:47:27.696: INFO: stdout: ""
Jul 23 12:47:27.696: INFO: update-demo-nautilus-mqfrc is created but not running
Jul 23 12:47:32.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:32.807: INFO: stderr: ""
Jul 23 12:47:32.807: INFO: stdout: "update-demo-nautilus-mqfrc update-demo-nautilus-sx7n6 "
Jul 23 12:47:32.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqfrc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:32.896: INFO: stderr: ""
Jul 23 12:47:32.896: INFO: stdout: "true"
Jul 23 12:47:32.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqfrc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:32.997: INFO: stderr: ""
Jul 23 12:47:32.997: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 23 12:47:32.997: INFO: validating pod update-demo-nautilus-mqfrc
Jul 23 12:47:33.000: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 23 12:47:33.000: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 23 12:47:33.000: INFO: update-demo-nautilus-mqfrc is verified up and running
Jul 23 12:47:33.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sx7n6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:33.091: INFO: stderr: ""
Jul 23 12:47:33.091: INFO: stdout: "true"
Jul 23 12:47:33.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sx7n6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:33.187: INFO: stderr: ""
Jul 23 12:47:33.187: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 23 12:47:33.187: INFO: validating pod update-demo-nautilus-sx7n6
Jul 23 12:47:33.190: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 23 12:47:33.191: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 23 12:47:33.191: INFO: update-demo-nautilus-sx7n6 is verified up and running
STEP: using delete to clean up resources
Jul 23 12:47:33.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:33.298: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 23 12:47:33.298: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 23 12:47:33.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-tl2th'
Jul 23 12:47:33.460: INFO: stderr: "No resources found.\n"
Jul 23 12:47:33.460: INFO: stdout: ""
Jul 23 12:47:33.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-tl2th -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 23 12:47:33.565: INFO: stderr: ""
Jul 23 12:47:33.565: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:47:33.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tl2th" for this suite.
Jul 23 12:48:01.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:48:01.676: INFO: namespace: e2e-tests-kubectl-tl2th, resource: bindings, ignored listing per whitelist
Jul 23 12:48:01.711: INFO: namespace e2e-tests-kubectl-tl2th deletion completed in 28.141807977s

• [SLOW TEST:53.319 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:48:01.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 23 12:48:01.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jul 23 12:48:02.018: INFO: stderr: ""
Jul 23 12:48:02.018: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-10T10:25:27Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:48:02.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vhkfg" for this suite.
Jul 23 12:48:08.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:48:08.060: INFO: namespace: e2e-tests-kubectl-vhkfg, resource: bindings, ignored listing per whitelist
Jul 23 12:48:08.109: INFO: namespace e2e-tests-kubectl-vhkfg deletion completed in 6.085321s

• [SLOW TEST:6.398 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 23 12:48:08.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-c24d6341-cce2-11ea-92a5-0242ac11000b
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-c24d6341-cce2-11ea-92a5-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 23 12:49:16.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4dj68" for this suite.
Jul 23 12:49:38.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 23 12:49:38.629: INFO: namespace: e2e-tests-configmap-4dj68, resource: bindings, ignored listing per whitelist
Jul 23 12:49:38.688: INFO: namespace e2e-tests-configmap-4dj68 deletion completed in 22.095446355s

• [SLOW TEST:90.579 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSJul 23 12:49:38.688: INFO: Running AfterSuite actions on all nodes
Jul 23 12:49:38.688: INFO: Running AfterSuite actions on node 1
Jul 23 12:49:38.688: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 7364.982 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS