I0827 01:04:57.545757 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0827 01:04:57.545992 6 e2e.go:109] Starting e2e run "5a71739a-5d76-42f8-a989-b0b711f002ac" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598490296 - Will randomize all specs
Will run 278 of 4844 specs

Aug 27 01:04:57.595: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 01:04:57.599: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 27 01:04:57.615: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 27 01:04:57.644: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 27 01:04:57.644: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 27 01:04:57.644: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 27 01:04:57.652: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 27 01:04:57.652: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 27 01:04:57.652: INFO: e2e test version: v1.17.11
Aug 27 01:04:57.653: INFO: kube-apiserver version: v1.17.5
Aug 27 01:04:57.653: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 01:04:57.656: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:04:57.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
Aug 27 01:04:57.709: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8989, will wait for the garbage collector to delete the pods
Aug 27 01:05:07.364: INFO: Deleting Job.batch foo took: 10.945692ms
Aug 27 01:05:07.864: INFO: Terminating Job.batch foo pods took: 500.290367ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:05:51.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8989" for this suite.

• [SLOW TEST:54.130 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":1,"skipped":23,"failed":0}
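For orientation, the object this spec creates and then deletes is a plain batch/v1 Job. A minimal Go sketch of such a Job follows, built on the same k8s.io/api types the suite uses; the Job name matches the log, but the image, command, and counts are illustrative assumptions:

```go
package sketches

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newTestJob builds a Job shaped like the "foo" Job above: `parallelism`
// pods run at once, and the spec waits for active pods to equal
// spec.Parallelism before deleting the Job. Deleting the Job with a
// non-orphaning propagation policy is what hands the remaining pods to the
// garbage collector, which is the wait logged before "Ensuring job was deleted".
func newTestJob(parallelism, completions int32) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "foo"},
		Spec: batchv1.JobSpec{
			Parallelism: &parallelism,
			Completions: &completions,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "c",                       // illustrative
						Image:   "busybox",                 // illustrative
						Command: []string{"sleep", "3600"}, // keep the pods active
					}},
				},
			},
		},
	}
}
```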
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:05:51.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 27 01:06:04.019: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 01:06:04.028: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 01:06:06.028: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 01:06:06.033: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 01:06:08.028: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 01:06:08.033: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 01:06:10.028: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 01:06:10.032: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 01:06:12.028: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 01:06:12.286: INFO: Pod pod-with-poststart-http-hook still exists
Aug 27 01:06:14.028: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 27 01:06:14.057: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:06:14.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3409" for this suite.

• [SLOW TEST:22.711 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":33,"failed":0}
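The pod under test wires the hook through the container's lifecycle field. A rough sketch follows, assuming an 8080 handler port and an illustrative echo path, and using the corev1.Handler type of the v1.17 API generation this log comes from (renamed LifecycleHandler in later releases):

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// podWithPostStartHTTPHook sketches pod-with-poststart-http-hook: as soon as
// the container starts, the kubelet fires an HTTP GET at the separately
// created handler pod, which is how "check poststart hook" can observe it.
func podWithPostStartHTTPHook(handlerPodIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // path assumed for illustration
							Host: handlerPodIP,
							Port: intstr.FromInt(8080), // handler port assumed
						},
					},
				},
			}},
		},
	}
}
```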
SSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:06:14.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-6813
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6813 to expose endpoints map[]
Aug 27 01:06:14.872: INFO: Get endpoints failed (29.078407ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug 27 01:06:15.969: INFO: successfully validated that service multi-endpoint-test in namespace services-6813 exposes endpoints map[] (1.125889826s elapsed)
STEP: Creating pod pod1 in namespace services-6813
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6813 to expose endpoints map[pod1:[100]]
Aug 27 01:06:20.295: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.32044398s elapsed, will retry)
Aug 27 01:06:23.864: INFO: successfully validated that service multi-endpoint-test in namespace services-6813 exposes endpoints map[pod1:[100]] (7.889283635s elapsed)
STEP: Creating pod pod2 in namespace services-6813
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6813 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 27 01:06:29.217: INFO: Unexpected endpoints: found map[da694551-c204-4f0b-91a6-71cd7e789f60:[100]], expected map[pod1:[100] pod2:[101]] (5.348927416s elapsed, will retry)
Aug 27 01:06:30.671: INFO: successfully validated that service multi-endpoint-test in namespace services-6813 exposes endpoints map[pod1:[100] pod2:[101]] (6.803316615s elapsed)
STEP: Deleting pod pod1 in namespace services-6813
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6813 to expose endpoints map[pod2:[101]]
Aug 27 01:06:31.994: INFO: successfully validated that service multi-endpoint-test in namespace services-6813 exposes endpoints map[pod2:[101]] (1.318765205s elapsed)
STEP: Deleting pod pod2 in namespace services-6813
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6813 to expose endpoints map[]
Aug 27 01:06:32.466: INFO: successfully validated that service multi-endpoint-test in namespace services-6813 exposes endpoints map[] (465.929018ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:06:33.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6813" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:18.910 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":3,"skipped":38,"failed":0}
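The endpoints maps above (pod1 behind target port 100, pod2 behind 101) come from a two-port Service. A minimal sketch of such a Service, with the service ports and label selector assumed for illustration:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// multiEndpointService sketches the Service behind the endpoints maps logged
// above: each named port targets a different container port, so the Endpoints
// object tracks pod1 under 100 and pod2 under 101 independently as pods come
// and go.
func multiEndpointService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"}, // label assumed
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
}
```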
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:06:33.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:06:47.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6047" for this suite.

• [SLOW TEST:13.986 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":4,"skipped":43,"failed":0}
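A ResourceQuota that would drive the lifecycle above looks roughly like the sketch below; the name and hard limits are illustrative, not the test's exact values. Once a pod fits under the quota its requests are charged to status.used, a pod that would exceed the remaining headroom is rejected at admission, and deleting the pod releases the usage again:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// podLifecycleQuota sketches a namespace-scoped quota with hard caps on pod
// count and aggregate requests; the quota controller recalculates usage as
// pods are created and deleted.
func podLifecycleQuota() *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"}, // name illustrative
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{ // values illustrative
				corev1.ResourcePods:           resource.MustParse("2"),
				corev1.ResourceRequestsCPU:    resource.MustParse("1"),
				corev1.ResourceRequestsMemory: resource.MustParse("500Mi"),
			},
		},
	}
}
```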
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:06:47.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 27 01:06:47.678: INFO: Waiting up to 5m0s for pod "pod-12cd41cd-672f-4f16-bf1c-d392b7db312e" in namespace "emptydir-9600" to be "success or failure"
Aug 27 01:06:47.710: INFO: Pod "pod-12cd41cd-672f-4f16-bf1c-d392b7db312e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.737315ms
Aug 27 01:06:49.766: INFO: Pod "pod-12cd41cd-672f-4f16-bf1c-d392b7db312e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088014818s
Aug 27 01:06:51.769: INFO: Pod "pod-12cd41cd-672f-4f16-bf1c-d392b7db312e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091197188s
STEP: Saw pod success
Aug 27 01:06:51.769: INFO: Pod "pod-12cd41cd-672f-4f16-bf1c-d392b7db312e" satisfied condition "success or failure"
Aug 27 01:06:51.772: INFO: Trying to get logs from node jerma-worker pod pod-12cd41cd-672f-4f16-bf1c-d392b7db312e container test-container: 
STEP: delete the pod
Aug 27 01:06:51.964: INFO: Waiting for pod pod-12cd41cd-672f-4f16-bf1c-d392b7db312e to disappear
Aug 27 01:06:52.030: INFO: Pod pod-12cd41cd-672f-4f16-bf1c-d392b7db312e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:06:52.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9600" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":44,"failed":0}
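The "(root,0777,tmpfs)" variant boils down to an emptyDir volume backed by memory. A sketch of the pod shape follows; the suite actually uses its mounttest image to create and verify the file, so the busybox command here is a stand-in under that assumption:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod sketches the volume setup behind the test: an emptyDir
// with Medium=Memory is mounted into the container, which writes a file as
// root with 0777 permissions and lists it so the test can check the mode.
func tmpfsEmptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"}, // name illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs instead of node disk
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in for the suite's mounttest image
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}
```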
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:06:52.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:06:52.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6892" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":6,"skipped":68,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:06:52.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:06:53.246: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e7d880a-7569-455c-86ed-3a639d55356f" in namespace "projected-254" to be "success or failure"
Aug 27 01:06:53.359: INFO: Pod "downwardapi-volume-8e7d880a-7569-455c-86ed-3a639d55356f": Phase="Pending", Reason="", readiness=false. Elapsed: 112.945178ms
Aug 27 01:06:55.363: INFO: Pod "downwardapi-volume-8e7d880a-7569-455c-86ed-3a639d55356f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116467123s
Aug 27 01:06:57.382: INFO: Pod "downwardapi-volume-8e7d880a-7569-455c-86ed-3a639d55356f": Phase="Running", Reason="", readiness=true. Elapsed: 4.135933087s
Aug 27 01:06:59.390: INFO: Pod "downwardapi-volume-8e7d880a-7569-455c-86ed-3a639d55356f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143482012s
STEP: Saw pod success
Aug 27 01:06:59.390: INFO: Pod "downwardapi-volume-8e7d880a-7569-455c-86ed-3a639d55356f" satisfied condition "success or failure"
Aug 27 01:06:59.393: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8e7d880a-7569-455c-86ed-3a639d55356f container client-container: 
STEP: delete the pod
Aug 27 01:06:59.474: INFO: Waiting for pod downwardapi-volume-8e7d880a-7569-455c-86ed-3a639d55356f to disappear
Aug 27 01:06:59.484: INFO: Pod downwardapi-volume-8e7d880a-7569-455c-86ed-3a639d55356f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:06:59.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-254" for this suite.

• [SLOW TEST:6.674 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":78,"failed":0}
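The projected downwardAPI volume this spec exercises exposes limits.memory as a file; because the container declares no memory limit, the value defaults to the node's allocatable memory. A sketch, with pod name, mount path, and file name assumed for illustration:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedMemoryLimitPod sketches the volume plugin under test: a projected
// volume whose downwardAPI source resolves limits.memory for the container,
// falling back to node allocatable when no limit is set.
func projectedMemoryLimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-pod"}, // name illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // stand-in image
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}
```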
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:06:59.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-8348/secret-test-5743b73b-e624-4b66-9a43-246bc88c1905
STEP: Creating a pod to test consume secrets
Aug 27 01:06:59.621: INFO: Waiting up to 5m0s for pod "pod-configmaps-56d0519c-866b-4a0f-9b06-6319b0756f0b" in namespace "secrets-8348" to be "success or failure"
Aug 27 01:06:59.634: INFO: Pod "pod-configmaps-56d0519c-866b-4a0f-9b06-6319b0756f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.868087ms
Aug 27 01:07:01.638: INFO: Pod "pod-configmaps-56d0519c-866b-4a0f-9b06-6319b0756f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016916074s
Aug 27 01:07:03.642: INFO: Pod "pod-configmaps-56d0519c-866b-4a0f-9b06-6319b0756f0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020865989s
STEP: Saw pod success
Aug 27 01:07:03.642: INFO: Pod "pod-configmaps-56d0519c-866b-4a0f-9b06-6319b0756f0b" satisfied condition "success or failure"
Aug 27 01:07:03.645: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-56d0519c-866b-4a0f-9b06-6319b0756f0b container env-test: 
STEP: delete the pod
Aug 27 01:07:03.665: INFO: Waiting for pod pod-configmaps-56d0519c-866b-4a0f-9b06-6319b0756f0b to disappear
Aug 27 01:07:03.687: INFO: Pod pod-configmaps-56d0519c-866b-4a0f-9b06-6319b0756f0b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:07:03.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8348" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":90,"failed":0}
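Consuming a Secret "via the environment" means a SecretKeySelector on the container's env list, which is what the env-test container then echoes and the test verifies. A sketch, with the env var and key names assumed for illustration:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretEnvPod sketches the consuming pod: one key of the referenced Secret
// becomes an environment variable, resolved by the kubelet at container start.
func secretEnvPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env"}, // name illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox", // stand-in image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA", // var name assumed
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
							Key:                  "data-1", // key name assumed
						},
					},
				}},
			}},
		},
	}
}
```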
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:07:03.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-c1e79210-a242-4bac-9889-f2278f468260 in namespace container-probe-1664
Aug 27 01:07:07.781: INFO: Started pod liveness-c1e79210-a242-4bac-9889-f2278f468260 in namespace container-probe-1664
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 01:07:07.784: INFO: Initial restart count of pod liveness-c1e79210-a242-4bac-9889-f2278f468260 is 0
Aug 27 01:07:32.034: INFO: Restart count of pod container-probe-1664/liveness-c1e79210-a242-4bac-9889-f2278f468260 is now 1 (24.249463038s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:07:32.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1664" for this suite.

• [SLOW TEST:28.503 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":97,"failed":0}
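The restart observed above (restartCount going from 0 to 1 after roughly 24 seconds) is driven by an HTTP liveness probe against /healthz on a container that deliberately starts failing. A sketch, using the embedded corev1.Handler of the v1.17 API (ProbeHandler in later releases); image, port, and probe timings are assumptions:

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthzLivenessPod sketches the probed pod: once /healthz starts returning
// errors, the kubelet's GET probe trips FailureThreshold and the container is
// restarted, which is exactly the restartCount transition the test asserts.
func healthzLivenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"}, // name illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness", // assumed: serves /healthz, then fails
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080), // port assumed
						},
					},
					InitialDelaySeconds: 15, // timing illustrative
					FailureThreshold:    1,
				},
			}},
		},
	}
}
```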
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:07:32.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 27 01:07:33.302: INFO: Waiting up to 5m0s for pod "pod-006e1f76-8cc1-44a3-9666-4d360541b310" in namespace "emptydir-552" to be "success or failure"
Aug 27 01:07:33.378: INFO: Pod "pod-006e1f76-8cc1-44a3-9666-4d360541b310": Phase="Pending", Reason="", readiness=false. Elapsed: 76.416884ms
Aug 27 01:07:35.402: INFO: Pod "pod-006e1f76-8cc1-44a3-9666-4d360541b310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099926528s
Aug 27 01:07:37.405: INFO: Pod "pod-006e1f76-8cc1-44a3-9666-4d360541b310": Phase="Running", Reason="", readiness=true. Elapsed: 4.103056783s
Aug 27 01:07:39.461: INFO: Pod "pod-006e1f76-8cc1-44a3-9666-4d360541b310": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.159030362s
STEP: Saw pod success
Aug 27 01:07:39.461: INFO: Pod "pod-006e1f76-8cc1-44a3-9666-4d360541b310" satisfied condition "success or failure"
Aug 27 01:07:39.464: INFO: Trying to get logs from node jerma-worker pod pod-006e1f76-8cc1-44a3-9666-4d360541b310 container test-container: 
STEP: delete the pod
Aug 27 01:07:39.554: INFO: Waiting for pod pod-006e1f76-8cc1-44a3-9666-4d360541b310 to disappear
Aug 27 01:07:39.811: INFO: Pod pod-006e1f76-8cc1-44a3-9666-4d360541b310 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:07:39.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-552" for this suite.

• [SLOW TEST:7.703 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":100,"failed":0}
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:07:39.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:07:40.306: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
alternatives.log
containers/

[... identical directory listing repeated for the remaining proxy requests ...]
[... remainder of the proxy test output, including its PASSED summary, truncated in the captured log ...]
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 27 01:07:40.624: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:07:49.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-74" for this suite.

• [SLOW TEST:8.890 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":12,"skipped":123,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:07:49.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:07:49.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5407'
Aug 27 01:07:58.209: INFO: stderr: ""
Aug 27 01:07:58.209: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 27 01:07:58.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5407'
Aug 27 01:07:59.186: INFO: stderr: ""
Aug 27 01:07:59.186: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 27 01:08:00.283: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:08:00.283: INFO: Found 0 / 1
Aug 27 01:08:01.431: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:08:01.431: INFO: Found 0 / 1
Aug 27 01:08:02.326: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:08:02.326: INFO: Found 0 / 1
Aug 27 01:08:03.197: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:08:03.198: INFO: Found 0 / 1
Aug 27 01:08:04.189: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:08:04.189: INFO: Found 1 / 1
Aug 27 01:08:04.189: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 27 01:08:04.191: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:08:04.191: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 27 01:08:04.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-c28bn --namespace=kubectl-5407'
Aug 27 01:08:04.310: INFO: stderr: ""
Aug 27 01:08:04.310: INFO: stdout: "Name:         agnhost-master-c28bn\nNamespace:    kubectl-5407\nPriority:     0\nNode:         jerma-worker/172.18.0.6\nStart Time:   Thu, 27 Aug 2020 01:07:58 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.2.83\nIPs:\n  IP:           10.244.2.83\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://ae1b7312950ef7102b6b65fc279aa20b3252a099be871280e8ce2f7575885987\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 27 Aug 2020 01:08:03 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vzvq7 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-vzvq7:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-vzvq7\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                   Message\n  ----    ------     ----       ----                   -------\n  Normal  Scheduled  <unknown>  default-scheduler      Successfully assigned kubectl-5407/agnhost-master-c28bn to jerma-worker\n  Normal  Pulled     4s         kubelet, jerma-worker  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s         kubelet, jerma-worker  Created container agnhost-master\n  Normal  Started    1s         kubelet, jerma-worker  Started container agnhost-master\n"
Aug 27 01:08:04.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5407'
Aug 27 01:08:04.432: INFO: stderr: ""
Aug 27 01:08:04.433: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5407\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  6s    replication-controller  Created pod: agnhost-master-c28bn\n"
Aug 27 01:08:04.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5407'
Aug 27 01:08:04.685: INFO: stderr: ""
Aug 27 01:08:04.686: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5407\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.98.247.53\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.83:6379\nSession Affinity:  None\nEvents:            <none>\n"
Aug 27 01:08:04.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Aug 27 01:08:04.805: INFO: stderr: ""
Aug 27 01:08:04.805: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:37:06 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Thu, 27 Aug 2020 01:07:56 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Thu, 27 Aug 2020 01:07:30 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Thu, 27 Aug 2020 01:07:30 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Thu, 27 Aug 2020 01:07:30 +0000   Sat, 15 Aug 2020 09:37:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Thu, 27 Aug 2020 01:07:30 +0000   Sat, 15 Aug 2020 09:37:40 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.10\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 e52c45bc589d48d995e8fd79ad5bf250\n  System UUID:                b981bdc7-d264-48ef-ab5e-3308e23aaf13\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-bvrm4                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     11d\n  kube-system                 coredns-6955765f44-db8rh                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     11d\n  kube-system                 etcd-jerma-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kindnet-j88mt                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      11d\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kube-proxy-hmb6l                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         11d\n  local-path-storage          local-path-provisioner-58f6947c7-p2cqw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              <none>\n"
Aug 27 01:08:04.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5407'
Aug 27 01:08:05.936: INFO: stderr: ""
Aug 27 01:08:05.936: INFO: stdout: "Name:         kubectl-5407\nLabels:       e2e-framework=kubectl\n              e2e-run=5a71739a-5d76-42f8-a989-b0b711f002ac\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:08:05.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5407" for this suite.

• [SLOW TEST:16.652 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":13,"skipped":158,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:08:05.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-pjvw
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 01:08:07.137: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pjvw" in namespace "subpath-9148" to be "success or failure"
Aug 27 01:08:07.332: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Pending", Reason="", readiness=false. Elapsed: 194.196393ms
Aug 27 01:08:09.335: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197256056s
Aug 27 01:08:11.461: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32379268s
Aug 27 01:08:13.465: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Running", Reason="", readiness=true. Elapsed: 6.327857455s
Aug 27 01:08:15.469: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Running", Reason="", readiness=true. Elapsed: 8.331848994s
Aug 27 01:08:17.474: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Running", Reason="", readiness=true. Elapsed: 10.336214964s
Aug 27 01:08:19.479: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Running", Reason="", readiness=true. Elapsed: 12.341090103s
Aug 27 01:08:21.482: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Running", Reason="", readiness=true. Elapsed: 14.344824536s
Aug 27 01:08:23.487: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Running", Reason="", readiness=true. Elapsed: 16.349604543s
Aug 27 01:08:25.491: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Running", Reason="", readiness=true. Elapsed: 18.353483827s
Aug 27 01:08:27.495: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Running", Reason="", readiness=true. Elapsed: 20.357487679s
Aug 27 01:08:29.499: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Running", Reason="", readiness=true. Elapsed: 22.361850834s
Aug 27 01:08:31.503: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Running", Reason="", readiness=true. Elapsed: 24.365551908s
Aug 27 01:08:33.508: INFO: Pod "pod-subpath-test-projected-pjvw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.370204471s
STEP: Saw pod success
Aug 27 01:08:33.508: INFO: Pod "pod-subpath-test-projected-pjvw" satisfied condition "success or failure"
Aug 27 01:08:33.510: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-pjvw container test-container-subpath-projected-pjvw: 
STEP: delete the pod
Aug 27 01:08:33.550: INFO: Waiting for pod pod-subpath-test-projected-pjvw to disappear
Aug 27 01:08:33.562: INFO: Pod pod-subpath-test-projected-pjvw no longer exists
STEP: Deleting pod pod-subpath-test-projected-pjvw
Aug 27 01:08:33.562: INFO: Deleting pod "pod-subpath-test-projected-pjvw" in namespace "subpath-9148"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:08:34.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9148" for this suite.

• [SLOW TEST:28.767 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":14,"skipped":164,"failed":0}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:08:34.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:08:39.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6828" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":169,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:08:39.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 27 01:08:44.281: INFO: Successfully updated pod "pod-update-activedeadlineseconds-39ea1e1c-85aa-4bf0-9e44-910e60ec493b"
Aug 27 01:08:44.281: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-39ea1e1c-85aa-4bf0-9e44-910e60ec493b" in namespace "pods-4888" to be "terminated due to deadline exceeded"
Aug 27 01:08:44.296: INFO: Pod "pod-update-activedeadlineseconds-39ea1e1c-85aa-4bf0-9e44-910e60ec493b": Phase="Running", Reason="", readiness=true. Elapsed: 14.586206ms
Aug 27 01:08:46.300: INFO: Pod "pod-update-activedeadlineseconds-39ea1e1c-85aa-4bf0-9e44-910e60ec493b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.018951854s
Aug 27 01:08:46.300: INFO: Pod "pod-update-activedeadlineseconds-39ea1e1c-85aa-4bf0-9e44-910e60ec493b" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:08:46.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4888" for this suite.

• [SLOW TEST:6.748 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":203,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:08:46.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 27 01:08:46.639: INFO: Waiting up to 5m0s for pod "downward-api-20d51b0e-486c-4b11-9903-6ca00724467a" in namespace "downward-api-9757" to be "success or failure"
Aug 27 01:08:46.648: INFO: Pod "downward-api-20d51b0e-486c-4b11-9903-6ca00724467a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.288248ms
Aug 27 01:08:48.653: INFO: Pod "downward-api-20d51b0e-486c-4b11-9903-6ca00724467a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013383233s
Aug 27 01:08:50.657: INFO: Pod "downward-api-20d51b0e-486c-4b11-9903-6ca00724467a": Phase="Running", Reason="", readiness=true. Elapsed: 4.017382897s
Aug 27 01:08:52.660: INFO: Pod "downward-api-20d51b0e-486c-4b11-9903-6ca00724467a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020860205s
STEP: Saw pod success
Aug 27 01:08:52.660: INFO: Pod "downward-api-20d51b0e-486c-4b11-9903-6ca00724467a" satisfied condition "success or failure"
Aug 27 01:08:52.662: INFO: Trying to get logs from node jerma-worker pod downward-api-20d51b0e-486c-4b11-9903-6ca00724467a container dapi-container: 
STEP: delete the pod
Aug 27 01:08:52.702: INFO: Waiting for pod downward-api-20d51b0e-486c-4b11-9903-6ca00724467a to disappear
Aug 27 01:08:52.709: INFO: Pod downward-api-20d51b0e-486c-4b11-9903-6ca00724467a no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:08:52.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9757" for this suite.

• [SLOW TEST:6.407 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":215,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:08:52.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:08:52.803: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b74745e-2e54-464e-a74d-950e097518c7" in namespace "projected-9403" to be "success or failure"
Aug 27 01:08:52.806: INFO: Pod "downwardapi-volume-8b74745e-2e54-464e-a74d-950e097518c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.716224ms
Aug 27 01:08:55.016: INFO: Pod "downwardapi-volume-8b74745e-2e54-464e-a74d-950e097518c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212681817s
Aug 27 01:08:57.031: INFO: Pod "downwardapi-volume-8b74745e-2e54-464e-a74d-950e097518c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.227530609s
STEP: Saw pod success
Aug 27 01:08:57.031: INFO: Pod "downwardapi-volume-8b74745e-2e54-464e-a74d-950e097518c7" satisfied condition "success or failure"
Aug 27 01:08:57.034: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8b74745e-2e54-464e-a74d-950e097518c7 container client-container: 
STEP: delete the pod
Aug 27 01:08:57.054: INFO: Waiting for pod downwardapi-volume-8b74745e-2e54-464e-a74d-950e097518c7 to disappear
Aug 27 01:08:57.057: INFO: Pod downwardapi-volume-8b74745e-2e54-464e-a74d-950e097518c7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:08:57.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9403" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":244,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:08:57.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:08:57.515: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-6350a7d4-cdd4-4e51-acfc-b8a9fa002dc8" in namespace "security-context-test-5867" to be "success or failure"
Aug 27 01:08:57.531: INFO: Pod "busybox-readonly-false-6350a7d4-cdd4-4e51-acfc-b8a9fa002dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.757121ms
Aug 27 01:08:59.538: INFO: Pod "busybox-readonly-false-6350a7d4-cdd4-4e51-acfc-b8a9fa002dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023755293s
Aug 27 01:09:01.542: INFO: Pod "busybox-readonly-false-6350a7d4-cdd4-4e51-acfc-b8a9fa002dc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027743412s
Aug 27 01:09:01.542: INFO: Pod "busybox-readonly-false-6350a7d4-cdd4-4e51-acfc-b8a9fa002dc8" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:09:01.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5867" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":262,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:09:01.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
STEP: creating a pod
Aug 27 01:09:01.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6945 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 27 01:09:01.767: INFO: stderr: ""
Aug 27 01:09:01.767: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Aug 27 01:09:01.767: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 27 01:09:01.767: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6945" to be "running and ready, or succeeded"
Aug 27 01:09:01.770: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.074415ms
Aug 27 01:09:03.774: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006593553s
Aug 27 01:09:05.778: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.011157362s
Aug 27 01:09:05.778: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 27 01:09:05.778: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 27 01:09:05.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6945'
Aug 27 01:09:05.897: INFO: stderr: ""
Aug 27 01:09:05.898: INFO: stdout: "I0827 01:09:04.726865       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/rhh8 333\nI0827 01:09:04.927009       1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/vw2 211\nI0827 01:09:05.127079       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/z98s 402\nI0827 01:09:05.327079       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/2rk 378\nI0827 01:09:05.527074       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/f6hw 566\nI0827 01:09:05.727064       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/dtjh 272\n"
STEP: limiting log lines
Aug 27 01:09:05.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6945 --tail=1'
Aug 27 01:09:06.014: INFO: stderr: ""
Aug 27 01:09:06.014: INFO: stdout: "I0827 01:09:05.927124       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/w8h 459\n"
Aug 27 01:09:06.014: INFO: got output "I0827 01:09:05.927124       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/w8h 459\n"
STEP: limiting log bytes
Aug 27 01:09:06.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6945 --limit-bytes=1'
Aug 27 01:09:06.130: INFO: stderr: ""
Aug 27 01:09:06.130: INFO: stdout: "I"
Aug 27 01:09:06.130: INFO: got output "I"
STEP: exposing timestamps
Aug 27 01:09:06.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6945 --tail=1 --timestamps'
Aug 27 01:09:06.232: INFO: stderr: ""
Aug 27 01:09:06.232: INFO: stdout: "2020-08-27T01:09:06.127190305Z I0827 01:09:06.127026       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/r9ns 592\n"
Aug 27 01:09:06.232: INFO: got output "2020-08-27T01:09:06.127190305Z I0827 01:09:06.127026       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/r9ns 592\n"
STEP: restricting to a time range
Aug 27 01:09:08.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6945 --since=1s'
Aug 27 01:09:09.183: INFO: stderr: ""
Aug 27 01:09:09.183: INFO: stdout: "I0827 01:09:08.127098       1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/9q4 574\nI0827 01:09:08.327085       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/5dqc 511\nI0827 01:09:08.527027       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/zxx6 239\nI0827 01:09:08.727086       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/pcth 207\nI0827 01:09:08.927044       1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/4rl8 384\nI0827 01:09:09.127136       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/j7jk 288\n"
Aug 27 01:09:09.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6945 --since=24h'
Aug 27 01:09:09.714: INFO: stderr: ""
Aug 27 01:09:09.714: INFO: stdout: "I0827 01:09:04.726865       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/rhh8 333\nI0827 01:09:04.927009       1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/vw2 211\nI0827 01:09:05.127079       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/z98s 402\nI0827 01:09:05.327079       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/2rk 378\nI0827 01:09:05.527074       1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/f6hw 566\nI0827 01:09:05.727064       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/dtjh 272\nI0827 01:09:05.927124       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/w8h 459\nI0827 01:09:06.127026       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/r9ns 592\nI0827 01:09:06.327048       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/9b2n 392\nI0827 01:09:06.527011       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/2zj2 500\nI0827 01:09:06.727021       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/5xpf 326\nI0827 01:09:06.927048       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/lcr 219\nI0827 01:09:07.127081       1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/xqx 217\nI0827 01:09:07.327093       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/v9q9 434\nI0827 01:09:07.527050       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/j4d 237\nI0827 01:09:07.727032       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/tgv 498\nI0827 01:09:07.927046       1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/grw 415\nI0827 01:09:08.127098       1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/9q4 574\nI0827 01:09:08.327085       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/5dqc 511\nI0827 01:09:08.527027       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/zxx6 239\nI0827 01:09:08.727086       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/pcth 207\nI0827 01:09:08.927044       1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/4rl8 384\nI0827 01:09:09.127136       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/j7jk 288\nI0827 01:09:09.327067       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/6vnn 219\nI0827 01:09:09.527033       1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/hvfs 242\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Aug 27 01:09:09.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6945'
Aug 27 01:09:21.703: INFO: stderr: ""
Aug 27 01:09:21.703: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:09:21.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6945" for this suite.

• [SLOW TEST:20.160 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":20,"skipped":302,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:09:21.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-2d0dd9e8-8166-47ee-918f-72aa71460f0e
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-2d0dd9e8-8166-47ee-918f-72aa71460f0e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:11:00.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3221" for this suite.

• [SLOW TEST:99.281 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":345,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:11:00.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0827 01:11:02.529277       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 01:11:02.529: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:11:02.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1658" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":22,"skipped":381,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:11:02.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:11:03.187: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 01:11:05.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:11:07.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:11:09.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087463, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:11:13.083: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:11:13.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-502" for this suite.
STEP: Destroying namespace "webhook-502-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.646 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":23,"skipped":384,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:11:15.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8947 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8947;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8947 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8947;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8947.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8947.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8947.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8947.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8947.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8947.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8947.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8947.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8947.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8947.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8947.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 205.180.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.180.205_udp@PTR;check="$$(dig +tcp +noall +answer +search 205.180.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.180.205_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8947 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8947;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8947 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8947;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8947.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8947.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8947.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8947.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8947.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8947.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8947.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8947.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8947.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8947.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8947.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8947.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 205.180.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.180.205_udp@PTR;check="$$(dig +tcp +noall +answer +search 205.180.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.180.205_tcp@PTR;sleep 1; done

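These probe loops rely on dig +search, so the partial names (dns-test-service, dns-test-service.dns-8947, and so on) are expanded through the pod's /etc/resolv.conf search path; each successful lookup writes an OK marker file that the test later reads back. To try the same search-path resolution by hand against a service of your own (busybox:1.28 is suggested because nslookup in some later busybox builds is known to misbehave; the pod and service names are placeholders):

  kubectl run dns-probe --image=busybox:1.28 --restart=Never -- sleep 3600
  kubectl exec dns-probe -- cat /etc/resolv.conf      # shows the search/ndots expansion rules
  kubectl exec dns-probe -- nslookup <your-service>   # partial name, resolved via the search path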
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 27 01:11:24.519: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.523: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.526: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.529: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.532: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.535: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.538: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.542: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.561: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.564: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.566: INFO: Unable to read jessie_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.569: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.572: INFO: Unable to read jessie_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.575: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.578: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.581: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:24.599: INFO: Lookups using dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8947 wheezy_tcp@dns-test-service.dns-8947 wheezy_udp@dns-test-service.dns-8947.svc wheezy_tcp@dns-test-service.dns-8947.svc wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8947 jessie_tcp@dns-test-service.dns-8947 jessie_udp@dns-test-service.dns-8947.svc jessie_tcp@dns-test-service.dns-8947.svc jessie_udp@_http._tcp.dns-test-service.dns-8947.svc jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc]

Aug 27 01:11:29.604: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.607: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.610: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.613: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.616: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.618: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.620: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.623: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.666: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.669: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.674: INFO: Unable to read jessie_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.677: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.679: INFO: Unable to read jessie_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.681: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.683: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.686: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:29.710: INFO: Lookups using dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8947 wheezy_tcp@dns-test-service.dns-8947 wheezy_udp@dns-test-service.dns-8947.svc wheezy_tcp@dns-test-service.dns-8947.svc wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8947 jessie_tcp@dns-test-service.dns-8947 jessie_udp@dns-test-service.dns-8947.svc jessie_tcp@dns-test-service.dns-8947.svc jessie_udp@_http._tcp.dns-test-service.dns-8947.svc jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc]

Aug 27 01:11:34.604: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.608: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.611: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.614: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.617: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.620: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.623: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.626: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.643: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.645: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.648: INFO: Unable to read jessie_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.651: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.653: INFO: Unable to read jessie_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.656: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.658: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.661: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:34.676: INFO: Lookups using dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8947 wheezy_tcp@dns-test-service.dns-8947 wheezy_udp@dns-test-service.dns-8947.svc wheezy_tcp@dns-test-service.dns-8947.svc wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8947 jessie_tcp@dns-test-service.dns-8947 jessie_udp@dns-test-service.dns-8947.svc jessie_tcp@dns-test-service.dns-8947.svc jessie_udp@_http._tcp.dns-test-service.dns-8947.svc jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc]

Aug 27 01:11:39.604: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.608: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.612: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.619: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.624: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.627: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.629: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.633: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.665: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.668: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.671: INFO: Unable to read jessie_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.673: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.676: INFO: Unable to read jessie_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.679: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.682: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.685: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:39.704: INFO: Lookups using dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8947 wheezy_tcp@dns-test-service.dns-8947 wheezy_udp@dns-test-service.dns-8947.svc wheezy_tcp@dns-test-service.dns-8947.svc wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8947 jessie_tcp@dns-test-service.dns-8947 jessie_udp@dns-test-service.dns-8947.svc jessie_tcp@dns-test-service.dns-8947.svc jessie_udp@_http._tcp.dns-test-service.dns-8947.svc jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc]

Aug 27 01:11:44.603: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.605: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.608: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.611: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.613: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.615: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.617: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.620: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.641: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.643: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.645: INFO: Unable to read jessie_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.649: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.651: INFO: Unable to read jessie_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.653: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.655: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.658: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:44.676: INFO: Lookups using dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8947 wheezy_tcp@dns-test-service.dns-8947 wheezy_udp@dns-test-service.dns-8947.svc wheezy_tcp@dns-test-service.dns-8947.svc wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8947 jessie_tcp@dns-test-service.dns-8947 jessie_udp@dns-test-service.dns-8947.svc jessie_tcp@dns-test-service.dns-8947.svc jessie_udp@_http._tcp.dns-test-service.dns-8947.svc jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc]

Aug 27 01:11:49.604: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.608: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.611: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.614: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.617: INFO: Unable to read wheezy_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.620: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.623: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.626: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.647: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.650: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.653: INFO: Unable to read jessie_udp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.656: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947 from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.658: INFO: Unable to read jessie_udp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.661: INFO: Unable to read jessie_tcp@dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.666: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.668: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc from pod dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae: the server could not find the requested resource (get pods dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae)
Aug 27 01:11:49.681: INFO: Lookups using dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8947 wheezy_tcp@dns-test-service.dns-8947 wheezy_udp@dns-test-service.dns-8947.svc wheezy_tcp@dns-test-service.dns-8947.svc wheezy_udp@_http._tcp.dns-test-service.dns-8947.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8947.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8947 jessie_tcp@dns-test-service.dns-8947 jessie_udp@dns-test-service.dns-8947.svc jessie_tcp@dns-test-service.dns-8947.svc jessie_udp@_http._tcp.dns-test-service.dns-8947.svc jessie_tcp@_http._tcp.dns-test-service.dns-8947.svc]

Aug 27 01:11:54.686: INFO: DNS probes using dns-8947/dns-test-a8c96f75-e3f5-4ee5-956b-16966126a8ae succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:11:55.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8947" for this suite.

• [SLOW TEST:40.382 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":24,"skipped":387,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:11:55.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Aug 27 01:11:55.802: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6643" to be "success or failure"
Aug 27 01:11:55.849: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 47.553695ms
Aug 27 01:11:58.030: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227733s
Aug 27 01:12:00.033: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231254418s
Aug 27 01:12:02.036: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.23428757s
STEP: Saw pod success
Aug 27 01:12:02.036: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 27 01:12:02.038: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 27 01:12:02.082: INFO: Waiting for pod pod-host-path-test to disappear
Aug 27 01:12:02.095: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:12:02.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6643" for this suite.

• [SLOW TEST:6.537 seconds]
[sig-storage] HostPath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":395,"failed":0}
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:12:02.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1287.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1287.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 27 01:12:08.246: INFO: DNS probes using dns-1287/dns-test-e5899eb3-ca30-4ff8-bee1-aafe34e38b22 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:12:08.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1287" for this suite.

• [SLOW TEST:6.206 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":26,"skipped":403,"failed":0}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:12:08.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 27 01:12:16.944: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 01:12:16.966: INFO: Pod pod-with-prestop-http-hook still exists
Aug 27 01:12:18.966: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 01:12:19.254: INFO: Pod pod-with-prestop-http-hook still exists
Aug 27 01:12:20.966: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 01:12:20.972: INFO: Pod pod-with-prestop-http-hook still exists
Aug 27 01:12:22.966: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 27 01:12:22.970: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:12:22.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5024" for this suite.

• [SLOW TEST:14.684 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":407,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:12:22.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:12:23.040: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:12:24.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5933" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":28,"skipped":423,"failed":0}

------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:12:24.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 27 01:12:24.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7337'
Aug 27 01:12:24.258: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 01:12:24.258: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Aug 27 01:12:24.290: INFO: scanned /root for discovery docs: 
Aug 27 01:12:24.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7337'
Aug 27 01:12:42.333: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 27 01:12:42.333: INFO: stdout: "Created e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b\nScaling up e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Aug 27 01:12:42.333: INFO: stdout: "Created e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b\nScaling up e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Aug 27 01:12:42.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7337'
Aug 27 01:12:42.638: INFO: stderr: ""
Aug 27 01:12:42.638: INFO: stdout: "e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b-cdph2 "
Aug 27 01:12:42.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b-cdph2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7337'
Aug 27 01:12:42.936: INFO: stderr: ""
Aug 27 01:12:42.936: INFO: stdout: "true"
Aug 27 01:12:42.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b-cdph2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7337'
Aug 27 01:12:43.218: INFO: stderr: ""
Aug 27 01:12:43.218: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Aug 27 01:12:43.218: INFO: e2e-test-httpd-rc-2e8f9a4d54343e98d9827911743bff2b-cdph2 is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Aug 27 01:12:43.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7337'
Aug 27 01:12:43.549: INFO: stderr: ""
Aug 27 01:12:43.549: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:12:43.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7337" for this suite.

• [SLOW TEST:19.868 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":29,"skipped":423,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:12:43.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:12:46.304: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 01:12:48.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087566, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087566, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087566, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087566, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:12:50.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087566, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087566, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087566, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087566, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:12:54.485: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:13:07.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6447" for this suite.
STEP: Destroying namespace "webhook-6447-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:24.862 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":30,"skipped":457,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:13:08.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-07c19773-bed8-4c83-9521-ce44a174b515
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:13:22.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9160" for this suite.

• [SLOW TEST:13.439 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":460,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:13:22.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:13:38.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4399" for this suite.

• [SLOW TEST:16.372 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":32,"skipped":496,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:13:38.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 27 01:13:50.810: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9134 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:13:50.810: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:13:50.839988       6 log.go:172] (0xc002f78580) (0xc003030d20) Create stream
I0827 01:13:50.840019       6 log.go:172] (0xc002f78580) (0xc003030d20) Stream added, broadcasting: 1
I0827 01:13:50.842689       6 log.go:172] (0xc002f78580) Reply frame received for 1
I0827 01:13:50.842744       6 log.go:172] (0xc002f78580) (0xc0025380a0) Create stream
I0827 01:13:50.842759       6 log.go:172] (0xc002f78580) (0xc0025380a0) Stream added, broadcasting: 3
I0827 01:13:50.843932       6 log.go:172] (0xc002f78580) Reply frame received for 3
I0827 01:13:50.844024       6 log.go:172] (0xc002f78580) (0xc0025381e0) Create stream
I0827 01:13:50.844054       6 log.go:172] (0xc002f78580) (0xc0025381e0) Stream added, broadcasting: 5
I0827 01:13:50.845469       6 log.go:172] (0xc002f78580) Reply frame received for 5
I0827 01:13:50.916056       6 log.go:172] (0xc002f78580) Data frame received for 5
I0827 01:13:50.916085       6 log.go:172] (0xc0025381e0) (5) Data frame handling
I0827 01:13:50.916109       6 log.go:172] (0xc002f78580) Data frame received for 3
I0827 01:13:50.916122       6 log.go:172] (0xc0025380a0) (3) Data frame handling
I0827 01:13:50.916136       6 log.go:172] (0xc0025380a0) (3) Data frame sent
I0827 01:13:50.916159       6 log.go:172] (0xc002f78580) Data frame received for 3
I0827 01:13:50.916169       6 log.go:172] (0xc0025380a0) (3) Data frame handling
I0827 01:13:50.917694       6 log.go:172] (0xc002f78580) Data frame received for 1
I0827 01:13:50.917715       6 log.go:172] (0xc003030d20) (1) Data frame handling
I0827 01:13:50.917725       6 log.go:172] (0xc003030d20) (1) Data frame sent
I0827 01:13:50.917741       6 log.go:172] (0xc002f78580) (0xc003030d20) Stream removed, broadcasting: 1
I0827 01:13:50.917754       6 log.go:172] (0xc002f78580) Go away received
I0827 01:13:50.918153       6 log.go:172] (0xc002f78580) (0xc003030d20) Stream removed, broadcasting: 1
I0827 01:13:50.918174       6 log.go:172] (0xc002f78580) (0xc0025380a0) Stream removed, broadcasting: 3
I0827 01:13:50.918183       6 log.go:172] (0xc002f78580) (0xc0025381e0) Stream removed, broadcasting: 5
Aug 27 01:13:50.918: INFO: Exec stderr: ""
Aug 27 01:13:50.918: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9134 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:13:50.918: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:13:50.943050       6 log.go:172] (0xc002bee420) (0xc002538460) Create stream
I0827 01:13:50.943086       6 log.go:172] (0xc002bee420) (0xc002538460) Stream added, broadcasting: 1
I0827 01:13:50.945637       6 log.go:172] (0xc002bee420) Reply frame received for 1
I0827 01:13:50.945689       6 log.go:172] (0xc002bee420) (0xc002538640) Create stream
I0827 01:13:50.945707       6 log.go:172] (0xc002bee420) (0xc002538640) Stream added, broadcasting: 3
I0827 01:13:50.946602       6 log.go:172] (0xc002bee420) Reply frame received for 3
I0827 01:13:50.946651       6 log.go:172] (0xc002bee420) (0xc002e22140) Create stream
I0827 01:13:50.946670       6 log.go:172] (0xc002bee420) (0xc002e22140) Stream added, broadcasting: 5
I0827 01:13:50.947557       6 log.go:172] (0xc002bee420) Reply frame received for 5
I0827 01:13:51.019500       6 log.go:172] (0xc002bee420) Data frame received for 5
I0827 01:13:51.019545       6 log.go:172] (0xc002e22140) (5) Data frame handling
I0827 01:13:51.019573       6 log.go:172] (0xc002bee420) Data frame received for 3
I0827 01:13:51.019587       6 log.go:172] (0xc002538640) (3) Data frame handling
I0827 01:13:51.019602       6 log.go:172] (0xc002538640) (3) Data frame sent
I0827 01:13:51.019649       6 log.go:172] (0xc002bee420) Data frame received for 3
I0827 01:13:51.019664       6 log.go:172] (0xc002538640) (3) Data frame handling
I0827 01:13:51.021235       6 log.go:172] (0xc002bee420) Data frame received for 1
I0827 01:13:51.021270       6 log.go:172] (0xc002538460) (1) Data frame handling
I0827 01:13:51.021290       6 log.go:172] (0xc002538460) (1) Data frame sent
I0827 01:13:51.021309       6 log.go:172] (0xc002bee420) (0xc002538460) Stream removed, broadcasting: 1
I0827 01:13:51.021330       6 log.go:172] (0xc002bee420) Go away received
I0827 01:13:51.021487       6 log.go:172] (0xc002bee420) (0xc002538460) Stream removed, broadcasting: 1
I0827 01:13:51.021516       6 log.go:172] (0xc002bee420) (0xc002538640) Stream removed, broadcasting: 3
I0827 01:13:51.021528       6 log.go:172] (0xc002bee420) (0xc002e22140) Stream removed, broadcasting: 5
Aug 27 01:13:51.021: INFO: Exec stderr: ""
Aug 27 01:13:51.021: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9134 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:13:51.021: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:13:51.049280       6 log.go:172] (0xc002a4a420) (0xc001d503c0) Create stream
I0827 01:13:51.049311       6 log.go:172] (0xc002a4a420) (0xc001d503c0) Stream added, broadcasting: 1
I0827 01:13:51.051324       6 log.go:172] (0xc002a4a420) Reply frame received for 1
I0827 01:13:51.051361       6 log.go:172] (0xc002a4a420) (0xc002538780) Create stream
I0827 01:13:51.051374       6 log.go:172] (0xc002a4a420) (0xc002538780) Stream added, broadcasting: 3
I0827 01:13:51.052264       6 log.go:172] (0xc002a4a420) Reply frame received for 3
I0827 01:13:51.052307       6 log.go:172] (0xc002a4a420) (0xc001d50500) Create stream
I0827 01:13:51.052319       6 log.go:172] (0xc002a4a420) (0xc001d50500) Stream added, broadcasting: 5
I0827 01:13:51.053305       6 log.go:172] (0xc002a4a420) Reply frame received for 5
I0827 01:13:51.119615       6 log.go:172] (0xc002a4a420) Data frame received for 5
I0827 01:13:51.119655       6 log.go:172] (0xc001d50500) (5) Data frame handling
I0827 01:13:51.119676       6 log.go:172] (0xc002a4a420) Data frame received for 3
I0827 01:13:51.119687       6 log.go:172] (0xc002538780) (3) Data frame handling
I0827 01:13:51.119697       6 log.go:172] (0xc002538780) (3) Data frame sent
I0827 01:13:51.119713       6 log.go:172] (0xc002a4a420) Data frame received for 3
I0827 01:13:51.119722       6 log.go:172] (0xc002538780) (3) Data frame handling
I0827 01:13:51.120709       6 log.go:172] (0xc002a4a420) Data frame received for 1
I0827 01:13:51.120863       6 log.go:172] (0xc001d503c0) (1) Data frame handling
I0827 01:13:51.120886       6 log.go:172] (0xc001d503c0) (1) Data frame sent
I0827 01:13:51.120901       6 log.go:172] (0xc002a4a420) (0xc001d503c0) Stream removed, broadcasting: 1
I0827 01:13:51.120928       6 log.go:172] (0xc002a4a420) Go away received
I0827 01:13:51.121082       6 log.go:172] (0xc002a4a420) (0xc001d503c0) Stream removed, broadcasting: 1
I0827 01:13:51.121109       6 log.go:172] (0xc002a4a420) (0xc002538780) Stream removed, broadcasting: 3
I0827 01:13:51.121132       6 log.go:172] (0xc002a4a420) (0xc001d50500) Stream removed, broadcasting: 5
Aug 27 01:13:51.121: INFO: Exec stderr: ""
Aug 27 01:13:51.121: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9134 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:13:51.121: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:13:51.147695       6 log.go:172] (0xc003166000) (0xc002b29220) Create stream
I0827 01:13:51.147733       6 log.go:172] (0xc003166000) (0xc002b29220) Stream added, broadcasting: 1
I0827 01:13:51.149900       6 log.go:172] (0xc003166000) Reply frame received for 1
I0827 01:13:51.149923       6 log.go:172] (0xc003166000) (0xc003030dc0) Create stream
I0827 01:13:51.149931       6 log.go:172] (0xc003166000) (0xc003030dc0) Stream added, broadcasting: 3
I0827 01:13:51.150671       6 log.go:172] (0xc003166000) Reply frame received for 3
I0827 01:13:51.150704       6 log.go:172] (0xc003166000) (0xc001d506e0) Create stream
I0827 01:13:51.150717       6 log.go:172] (0xc003166000) (0xc001d506e0) Stream added, broadcasting: 5
I0827 01:13:51.151574       6 log.go:172] (0xc003166000) Reply frame received for 5
I0827 01:13:51.231230       6 log.go:172] (0xc003166000) Data frame received for 5
I0827 01:13:51.231288       6 log.go:172] (0xc001d506e0) (5) Data frame handling
I0827 01:13:51.231347       6 log.go:172] (0xc003166000) Data frame received for 3
I0827 01:13:51.231372       6 log.go:172] (0xc003030dc0) (3) Data frame handling
I0827 01:13:51.231408       6 log.go:172] (0xc003030dc0) (3) Data frame sent
I0827 01:13:51.231430       6 log.go:172] (0xc003166000) Data frame received for 3
I0827 01:13:51.231443       6 log.go:172] (0xc003030dc0) (3) Data frame handling
I0827 01:13:51.232943       6 log.go:172] (0xc003166000) Data frame received for 1
I0827 01:13:51.232959       6 log.go:172] (0xc002b29220) (1) Data frame handling
I0827 01:13:51.232967       6 log.go:172] (0xc002b29220) (1) Data frame sent
I0827 01:13:51.232976       6 log.go:172] (0xc003166000) (0xc002b29220) Stream removed, broadcasting: 1
I0827 01:13:51.233065       6 log.go:172] (0xc003166000) (0xc002b29220) Stream removed, broadcasting: 1
I0827 01:13:51.233077       6 log.go:172] (0xc003166000) (0xc003030dc0) Stream removed, broadcasting: 3
I0827 01:13:51.233196       6 log.go:172] (0xc003166000) Go away received
I0827 01:13:51.233261       6 log.go:172] (0xc003166000) (0xc001d506e0) Stream removed, broadcasting: 5
Aug 27 01:13:51.233: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 27 01:13:51.233: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9134 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:13:51.233: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:13:51.268270       6 log.go:172] (0xc00180e000) (0xc002e221e0) Create stream
I0827 01:13:51.268304       6 log.go:172] (0xc00180e000) (0xc002e221e0) Stream added, broadcasting: 1
I0827 01:13:51.270361       6 log.go:172] (0xc00180e000) Reply frame received for 1
I0827 01:13:51.270399       6 log.go:172] (0xc00180e000) (0xc002538820) Create stream
I0827 01:13:51.270411       6 log.go:172] (0xc00180e000) (0xc002538820) Stream added, broadcasting: 3
I0827 01:13:51.271104       6 log.go:172] (0xc00180e000) Reply frame received for 3
I0827 01:13:51.271129       6 log.go:172] (0xc00180e000) (0xc002b29400) Create stream
I0827 01:13:51.271142       6 log.go:172] (0xc00180e000) (0xc002b29400) Stream added, broadcasting: 5
I0827 01:13:51.271821       6 log.go:172] (0xc00180e000) Reply frame received for 5
I0827 01:13:51.346308       6 log.go:172] (0xc00180e000) Data frame received for 5
I0827 01:13:51.346344       6 log.go:172] (0xc002b29400) (5) Data frame handling
I0827 01:13:51.346376       6 log.go:172] (0xc00180e000) Data frame received for 3
I0827 01:13:51.346392       6 log.go:172] (0xc002538820) (3) Data frame handling
I0827 01:13:51.346422       6 log.go:172] (0xc002538820) (3) Data frame sent
I0827 01:13:51.346435       6 log.go:172] (0xc00180e000) Data frame received for 3
I0827 01:13:51.346447       6 log.go:172] (0xc002538820) (3) Data frame handling
I0827 01:13:51.347860       6 log.go:172] (0xc00180e000) Data frame received for 1
I0827 01:13:51.347884       6 log.go:172] (0xc002e221e0) (1) Data frame handling
I0827 01:13:51.347899       6 log.go:172] (0xc002e221e0) (1) Data frame sent
I0827 01:13:51.347914       6 log.go:172] (0xc00180e000) (0xc002e221e0) Stream removed, broadcasting: 1
I0827 01:13:51.348017       6 log.go:172] (0xc00180e000) (0xc002e221e0) Stream removed, broadcasting: 1
I0827 01:13:51.348036       6 log.go:172] (0xc00180e000) (0xc002538820) Stream removed, broadcasting: 3
I0827 01:13:51.348280       6 log.go:172] (0xc00180e000) Go away received
I0827 01:13:51.348319       6 log.go:172] (0xc00180e000) (0xc002b29400) Stream removed, broadcasting: 5
Aug 27 01:13:51.348: INFO: Exec stderr: ""
Aug 27 01:13:51.348: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9134 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:13:51.348: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:13:51.374114       6 log.go:172] (0xc002beed10) (0xc002538960) Create stream
I0827 01:13:51.374140       6 log.go:172] (0xc002beed10) (0xc002538960) Stream added, broadcasting: 1
I0827 01:13:51.376347       6 log.go:172] (0xc002beed10) Reply frame received for 1
I0827 01:13:51.376392       6 log.go:172] (0xc002beed10) (0xc001d50820) Create stream
I0827 01:13:51.376408       6 log.go:172] (0xc002beed10) (0xc001d50820) Stream added, broadcasting: 3
I0827 01:13:51.377339       6 log.go:172] (0xc002beed10) Reply frame received for 3
I0827 01:13:51.377379       6 log.go:172] (0xc002beed10) (0xc002e22320) Create stream
I0827 01:13:51.377391       6 log.go:172] (0xc002beed10) (0xc002e22320) Stream added, broadcasting: 5
I0827 01:13:51.378203       6 log.go:172] (0xc002beed10) Reply frame received for 5
I0827 01:13:51.467071       6 log.go:172] (0xc002beed10) Data frame received for 5
I0827 01:13:51.467123       6 log.go:172] (0xc002e22320) (5) Data frame handling
I0827 01:13:51.467184       6 log.go:172] (0xc002beed10) Data frame received for 3
I0827 01:13:51.467227       6 log.go:172] (0xc001d50820) (3) Data frame handling
I0827 01:13:51.467256       6 log.go:172] (0xc001d50820) (3) Data frame sent
I0827 01:13:51.467271       6 log.go:172] (0xc002beed10) Data frame received for 3
I0827 01:13:51.467283       6 log.go:172] (0xc001d50820) (3) Data frame handling
I0827 01:13:51.468704       6 log.go:172] (0xc002beed10) Data frame received for 1
I0827 01:13:51.468798       6 log.go:172] (0xc002538960) (1) Data frame handling
I0827 01:13:51.468823       6 log.go:172] (0xc002538960) (1) Data frame sent
I0827 01:13:51.468835       6 log.go:172] (0xc002beed10) (0xc002538960) Stream removed, broadcasting: 1
I0827 01:13:51.468855       6 log.go:172] (0xc002beed10) Go away received
I0827 01:13:51.468992       6 log.go:172] (0xc002beed10) (0xc002538960) Stream removed, broadcasting: 1
I0827 01:13:51.469044       6 log.go:172] (0xc002beed10) (0xc001d50820) Stream removed, broadcasting: 3
I0827 01:13:51.469058       6 log.go:172] (0xc002beed10) (0xc002e22320) Stream removed, broadcasting: 5
Aug 27 01:13:51.469: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 27 01:13:51.469: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9134 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:13:51.469: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:13:51.502508       6 log.go:172] (0xc00180e630) (0xc002e22640) Create stream
I0827 01:13:51.502541       6 log.go:172] (0xc00180e630) (0xc002e22640) Stream added, broadcasting: 1
I0827 01:13:51.506064       6 log.go:172] (0xc00180e630) Reply frame received for 1
I0827 01:13:51.506104       6 log.go:172] (0xc00180e630) (0xc002e226e0) Create stream
I0827 01:13:51.506114       6 log.go:172] (0xc00180e630) (0xc002e226e0) Stream added, broadcasting: 3
I0827 01:13:51.506999       6 log.go:172] (0xc00180e630) Reply frame received for 3
I0827 01:13:51.507042       6 log.go:172] (0xc00180e630) (0xc002e22780) Create stream
I0827 01:13:51.507055       6 log.go:172] (0xc00180e630) (0xc002e22780) Stream added, broadcasting: 5
I0827 01:13:51.508059       6 log.go:172] (0xc00180e630) Reply frame received for 5
I0827 01:13:51.567692       6 log.go:172] (0xc00180e630) Data frame received for 5
I0827 01:13:51.567730       6 log.go:172] (0xc002e22780) (5) Data frame handling
I0827 01:13:51.567750       6 log.go:172] (0xc00180e630) Data frame received for 3
I0827 01:13:51.567758       6 log.go:172] (0xc002e226e0) (3) Data frame handling
I0827 01:13:51.567765       6 log.go:172] (0xc002e226e0) (3) Data frame sent
I0827 01:13:51.567772       6 log.go:172] (0xc00180e630) Data frame received for 3
I0827 01:13:51.567790       6 log.go:172] (0xc002e226e0) (3) Data frame handling
I0827 01:13:51.569173       6 log.go:172] (0xc00180e630) Data frame received for 1
I0827 01:13:51.569198       6 log.go:172] (0xc002e22640) (1) Data frame handling
I0827 01:13:51.569212       6 log.go:172] (0xc002e22640) (1) Data frame sent
I0827 01:13:51.569235       6 log.go:172] (0xc00180e630) (0xc002e22640) Stream removed, broadcasting: 1
I0827 01:13:51.569265       6 log.go:172] (0xc00180e630) Go away received
I0827 01:13:51.569338       6 log.go:172] (0xc00180e630) (0xc002e22640) Stream removed, broadcasting: 1
I0827 01:13:51.569349       6 log.go:172] (0xc00180e630) (0xc002e226e0) Stream removed, broadcasting: 3
I0827 01:13:51.569356       6 log.go:172] (0xc00180e630) (0xc002e22780) Stream removed, broadcasting: 5
Aug 27 01:13:51.569: INFO: Exec stderr: ""
Aug 27 01:13:51.569: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9134 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:13:51.569: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:13:51.594413       6 log.go:172] (0xc002f78c60) (0xc003031040) Create stream
I0827 01:13:51.594437       6 log.go:172] (0xc002f78c60) (0xc003031040) Stream added, broadcasting: 1
I0827 01:13:51.597266       6 log.go:172] (0xc002f78c60) Reply frame received for 1
I0827 01:13:51.597298       6 log.go:172] (0xc002f78c60) (0xc003031180) Create stream
I0827 01:13:51.597308       6 log.go:172] (0xc002f78c60) (0xc003031180) Stream added, broadcasting: 3
I0827 01:13:51.598298       6 log.go:172] (0xc002f78c60) Reply frame received for 3
I0827 01:13:51.598323       6 log.go:172] (0xc002f78c60) (0xc003031220) Create stream
I0827 01:13:51.598333       6 log.go:172] (0xc002f78c60) (0xc003031220) Stream added, broadcasting: 5
I0827 01:13:51.599520       6 log.go:172] (0xc002f78c60) Reply frame received for 5
I0827 01:13:51.656451       6 log.go:172] (0xc002f78c60) Data frame received for 5
I0827 01:13:51.656498       6 log.go:172] (0xc003031220) (5) Data frame handling
I0827 01:13:51.656526       6 log.go:172] (0xc002f78c60) Data frame received for 3
I0827 01:13:51.656542       6 log.go:172] (0xc003031180) (3) Data frame handling
I0827 01:13:51.656563       6 log.go:172] (0xc003031180) (3) Data frame sent
I0827 01:13:51.656580       6 log.go:172] (0xc002f78c60) Data frame received for 3
I0827 01:13:51.656590       6 log.go:172] (0xc003031180) (3) Data frame handling
I0827 01:13:51.657480       6 log.go:172] (0xc002f78c60) Data frame received for 1
I0827 01:13:51.657496       6 log.go:172] (0xc003031040) (1) Data frame handling
I0827 01:13:51.657508       6 log.go:172] (0xc003031040) (1) Data frame sent
I0827 01:13:51.657523       6 log.go:172] (0xc002f78c60) (0xc003031040) Stream removed, broadcasting: 1
I0827 01:13:51.657581       6 log.go:172] (0xc002f78c60) Go away received
I0827 01:13:51.657622       6 log.go:172] (0xc002f78c60) (0xc003031040) Stream removed, broadcasting: 1
I0827 01:13:51.657652       6 log.go:172] (0xc002f78c60) (0xc003031180) Stream removed, broadcasting: 3
I0827 01:13:51.657668       6 log.go:172] (0xc002f78c60) (0xc003031220) Stream removed, broadcasting: 5
Aug 27 01:13:51.657: INFO: Exec stderr: ""
Aug 27 01:13:51.657: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9134 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:13:51.657: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:13:51.702231       6 log.go:172] (0xc00180ec60) (0xc002e22aa0) Create stream
I0827 01:13:51.702260       6 log.go:172] (0xc00180ec60) (0xc002e22aa0) Stream added, broadcasting: 1
I0827 01:13:51.705126       6 log.go:172] (0xc00180ec60) Reply frame received for 1
I0827 01:13:51.705172       6 log.go:172] (0xc00180ec60) (0xc002e22d20) Create stream
I0827 01:13:51.705186       6 log.go:172] (0xc00180ec60) (0xc002e22d20) Stream added, broadcasting: 3
I0827 01:13:51.706055       6 log.go:172] (0xc00180ec60) Reply frame received for 3
I0827 01:13:51.706088       6 log.go:172] (0xc00180ec60) (0xc002e22dc0) Create stream
I0827 01:13:51.706098       6 log.go:172] (0xc00180ec60) (0xc002e22dc0) Stream added, broadcasting: 5
I0827 01:13:51.706985       6 log.go:172] (0xc00180ec60) Reply frame received for 5
I0827 01:13:51.782503       6 log.go:172] (0xc00180ec60) Data frame received for 5
I0827 01:13:51.782537       6 log.go:172] (0xc002e22dc0) (5) Data frame handling
I0827 01:13:51.782566       6 log.go:172] (0xc00180ec60) Data frame received for 3
I0827 01:13:51.782591       6 log.go:172] (0xc002e22d20) (3) Data frame handling
I0827 01:13:51.782610       6 log.go:172] (0xc002e22d20) (3) Data frame sent
I0827 01:13:51.782623       6 log.go:172] (0xc00180ec60) Data frame received for 3
I0827 01:13:51.782634       6 log.go:172] (0xc002e22d20) (3) Data frame handling
I0827 01:13:51.783751       6 log.go:172] (0xc00180ec60) Data frame received for 1
I0827 01:13:51.783784       6 log.go:172] (0xc002e22aa0) (1) Data frame handling
I0827 01:13:51.783797       6 log.go:172] (0xc002e22aa0) (1) Data frame sent
I0827 01:13:51.783818       6 log.go:172] (0xc00180ec60) (0xc002e22aa0) Stream removed, broadcasting: 1
I0827 01:13:51.783846       6 log.go:172] (0xc00180ec60) Go away received
I0827 01:13:51.783977       6 log.go:172] (0xc00180ec60) (0xc002e22aa0) Stream removed, broadcasting: 1
I0827 01:13:51.784031       6 log.go:172] (0xc00180ec60) (0xc002e22d20) Stream removed, broadcasting: 3
I0827 01:13:51.784058       6 log.go:172] (0xc00180ec60) (0xc002e22dc0) Stream removed, broadcasting: 5
Aug 27 01:13:51.784: INFO: Exec stderr: ""
Aug 27 01:13:51.784: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9134 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:13:51.784: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:13:51.813207       6 log.go:172] (0xc00180f290) (0xc002e23040) Create stream
I0827 01:13:51.813242       6 log.go:172] (0xc00180f290) (0xc002e23040) Stream added, broadcasting: 1
I0827 01:13:51.815822       6 log.go:172] (0xc00180f290) Reply frame received for 1
I0827 01:13:51.815859       6 log.go:172] (0xc00180f290) (0xc002e23220) Create stream
I0827 01:13:51.815873       6 log.go:172] (0xc00180f290) (0xc002e23220) Stream added, broadcasting: 3
I0827 01:13:51.817035       6 log.go:172] (0xc00180f290) Reply frame received for 3
I0827 01:13:51.817072       6 log.go:172] (0xc00180f290) (0xc002538aa0) Create stream
I0827 01:13:51.817084       6 log.go:172] (0xc00180f290) (0xc002538aa0) Stream added, broadcasting: 5
I0827 01:13:51.817932       6 log.go:172] (0xc00180f290) Reply frame received for 5
I0827 01:13:51.885823       6 log.go:172] (0xc00180f290) Data frame received for 5
I0827 01:13:51.885865       6 log.go:172] (0xc002538aa0) (5) Data frame handling
I0827 01:13:51.885894       6 log.go:172] (0xc00180f290) Data frame received for 3
I0827 01:13:51.885914       6 log.go:172] (0xc002e23220) (3) Data frame handling
I0827 01:13:51.885959       6 log.go:172] (0xc002e23220) (3) Data frame sent
I0827 01:13:51.885981       6 log.go:172] (0xc00180f290) Data frame received for 3
I0827 01:13:51.885996       6 log.go:172] (0xc002e23220) (3) Data frame handling
I0827 01:13:51.887179       6 log.go:172] (0xc00180f290) Data frame received for 1
I0827 01:13:51.887216       6 log.go:172] (0xc002e23040) (1) Data frame handling
I0827 01:13:51.887245       6 log.go:172] (0xc002e23040) (1) Data frame sent
I0827 01:13:51.887267       6 log.go:172] (0xc00180f290) (0xc002e23040) Stream removed, broadcasting: 1
I0827 01:13:51.887283       6 log.go:172] (0xc00180f290) Go away received
I0827 01:13:51.887386       6 log.go:172] (0xc00180f290) (0xc002e23040) Stream removed, broadcasting: 1
I0827 01:13:51.887403       6 log.go:172] (0xc00180f290) (0xc002e23220) Stream removed, broadcasting: 3
I0827 01:13:51.887410       6 log.go:172] (0xc00180f290) (0xc002538aa0) Stream removed, broadcasting: 5
Aug 27 01:13:51.887: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:13:51.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-9134" for this suite.

• [SLOW TEST:13.256 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":533,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:13:51.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 27 01:13:51.936: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 01:13:51.962: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 01:13:51.965: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 27 01:13:51.972: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 27 01:13:51.972: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 01:13:51.972: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 27 01:13:51.972: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 01:13:51.972: INFO: rally-322c97bd-r82sc9fu from c-rally-322c97bd-9w2ekw35 started at 2020-08-27 01:13:22 +0000 UTC (1 container status recorded)
Aug 27 01:13:51.972: INFO: 	Container rally-322c97bd-r82sc9fu ready: true, restart count 0
Aug 27 01:13:51.972: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 27 01:13:51.972: INFO: 	Container app ready: true, restart count 0
Aug 27 01:13:51.972: INFO: test-pod from e2e-kubelet-etc-hosts-9134 started at 2020-08-27 01:13:38 +0000 UTC (3 container statuses recorded)
Aug 27 01:13:51.972: INFO: 	Container busybox-1 ready: true, restart count 0
Aug 27 01:13:51.972: INFO: 	Container busybox-2 ready: true, restart count 0
Aug 27 01:13:51.972: INFO: 	Container busybox-3 ready: true, restart count 0
Aug 27 01:13:51.972: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 27 01:13:51.980: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 27 01:13:51.980: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 01:13:51.980: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 27 01:13:51.980: INFO: 	Container app ready: true, restart count 0
Aug 27 01:13:51.980: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 27 01:13:51.980: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 01:13:51.980: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container status recorded)
Aug 27 01:13:51.980: INFO: 	Container httpd ready: true, restart count 0
Aug 27 01:13:51.980: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-9134 started at 2020-08-27 01:13:46 +0000 UTC (2 container statuses recorded)
Aug 27 01:13:51.980: INFO: 	Container busybox-1 ready: true, restart count 0
Aug 27 01:13:51.980: INFO: 	Container busybox-2 ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-57480c77-fc64-4052-9e87-8ce4256ebe9d 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-57480c77-fc64-4052-9e87-8ce4256ebe9d off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-57480c77-fc64-4052-9e87-8ce4256ebe9d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:14:11.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5471" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:19.214 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":34,"skipped":544,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:14:11.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-68bb9a67-3709-47e0-a4c4-1d280160216c
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-68bb9a67-3709-47e0-a4c4-1d280160216c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:14:17.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5359" for this suite.

• [SLOW TEST:6.547 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":569,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:14:17.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 27 01:14:18.324: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3066 /api/v1/namespaces/watch-3066/configmaps/e2e-watch-test-label-changed 46569006-90c1-4972-9fe1-fc1ae2e5adeb 4077338 0 2020-08-27 01:14:18 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 27 01:14:18.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3066 /api/v1/namespaces/watch-3066/configmaps/e2e-watch-test-label-changed 46569006-90c1-4972-9fe1-fc1ae2e5adeb 4077339 0 2020-08-27 01:14:18 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 27 01:14:18.324: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3066 /api/v1/namespaces/watch-3066/configmaps/e2e-watch-test-label-changed 46569006-90c1-4972-9fe1-fc1ae2e5adeb 4077340 0 2020-08-27 01:14:18 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 27 01:14:28.914: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3066 /api/v1/namespaces/watch-3066/configmaps/e2e-watch-test-label-changed 46569006-90c1-4972-9fe1-fc1ae2e5adeb 4077413 0 2020-08-27 01:14:18 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 01:14:28.915: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3066 /api/v1/namespaces/watch-3066/configmaps/e2e-watch-test-label-changed 46569006-90c1-4972-9fe1-fc1ae2e5adeb 4077414 0 2020-08-27 01:14:18 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 27 01:14:28.915: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3066 /api/v1/namespaces/watch-3066/configmaps/e2e-watch-test-label-changed 46569006-90c1-4972-9fe1-fc1ae2e5adeb 4077417 0 2020-08-27 01:14:18 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:14:28.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3066" for this suite.

• [SLOW TEST:11.506 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":36,"skipped":616,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:14:29.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:14:29.353: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7689e858-3945-4e03-bd3a-4ca088674919" in namespace "projected-4436" to be "success or failure"
Aug 27 01:14:29.597: INFO: Pod "downwardapi-volume-7689e858-3945-4e03-bd3a-4ca088674919": Phase="Pending", Reason="", readiness=false. Elapsed: 243.392303ms
Aug 27 01:14:31.605: INFO: Pod "downwardapi-volume-7689e858-3945-4e03-bd3a-4ca088674919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251609347s
Aug 27 01:14:33.764: INFO: Pod "downwardapi-volume-7689e858-3945-4e03-bd3a-4ca088674919": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410160533s
Aug 27 01:14:36.070: INFO: Pod "downwardapi-volume-7689e858-3945-4e03-bd3a-4ca088674919": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.71668913s
STEP: Saw pod success
Aug 27 01:14:36.070: INFO: Pod "downwardapi-volume-7689e858-3945-4e03-bd3a-4ca088674919" satisfied condition "success or failure"
Aug 27 01:14:36.159: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7689e858-3945-4e03-bd3a-4ca088674919 container client-container: 
STEP: delete the pod
Aug 27 01:14:36.298: INFO: Waiting for pod downwardapi-volume-7689e858-3945-4e03-bd3a-4ca088674919 to disappear
Aug 27 01:14:36.302: INFO: Pod downwardapi-volume-7689e858-3945-4e03-bd3a-4ca088674919 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:14:36.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4436" for this suite.

• [SLOW TEST:7.145 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":621,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:14:36.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 27 01:14:36.455: INFO: Waiting up to 5m0s for pod "pod-32b05593-0d47-4dc3-97b7-37378511a70d" in namespace "emptydir-6854" to be "success or failure"
Aug 27 01:14:36.502: INFO: Pod "pod-32b05593-0d47-4dc3-97b7-37378511a70d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.557781ms
Aug 27 01:14:38.740: INFO: Pod "pod-32b05593-0d47-4dc3-97b7-37378511a70d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285083432s
Aug 27 01:14:41.195: INFO: Pod "pod-32b05593-0d47-4dc3-97b7-37378511a70d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.740126199s
Aug 27 01:14:43.199: INFO: Pod "pod-32b05593-0d47-4dc3-97b7-37378511a70d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.744374581s
STEP: Saw pod success
Aug 27 01:14:43.199: INFO: Pod "pod-32b05593-0d47-4dc3-97b7-37378511a70d" satisfied condition "success or failure"
Aug 27 01:14:43.202: INFO: Trying to get logs from node jerma-worker2 pod pod-32b05593-0d47-4dc3-97b7-37378511a70d container test-container: 
STEP: delete the pod
Aug 27 01:14:43.248: INFO: Waiting for pod pod-32b05593-0d47-4dc3-97b7-37378511a70d to disappear
Aug 27 01:14:43.258: INFO: Pod pod-32b05593-0d47-4dc3-97b7-37378511a70d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:14:43.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6854" for this suite.

• [SLOW TEST:6.954 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":652,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:14:43.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 27 01:14:43.489: INFO: Created pod &Pod{ObjectMeta:{dns-3417  dns-3417 /api/v1/namespaces/dns-3417/pods/dns-3417 49292dcd-e8a7-4298-aa0b-d6478e362805 4077570 0 2020-08-27 01:14:43 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zr9bl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zr9bl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zr9bl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 27 01:14:47.521: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3417 PodName:dns-3417 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:14:47.521: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:14:47.551844       6 log.go:172] (0xc002a4ae70) (0xc001256aa0) Create stream
I0827 01:14:47.551872       6 log.go:172] (0xc002a4ae70) (0xc001256aa0) Stream added, broadcasting: 1
I0827 01:14:47.553916       6 log.go:172] (0xc002a4ae70) Reply frame received for 1
I0827 01:14:47.553975       6 log.go:172] (0xc002a4ae70) (0xc001256b40) Create stream
I0827 01:14:47.553987       6 log.go:172] (0xc002a4ae70) (0xc001256b40) Stream added, broadcasting: 3
I0827 01:14:47.554753       6 log.go:172] (0xc002a4ae70) Reply frame received for 3
I0827 01:14:47.554786       6 log.go:172] (0xc002a4ae70) (0xc0030312c0) Create stream
I0827 01:14:47.554796       6 log.go:172] (0xc002a4ae70) (0xc0030312c0) Stream added, broadcasting: 5
I0827 01:14:47.555437       6 log.go:172] (0xc002a4ae70) Reply frame received for 5
I0827 01:14:47.628142       6 log.go:172] (0xc002a4ae70) Data frame received for 3
I0827 01:14:47.628174       6 log.go:172] (0xc001256b40) (3) Data frame handling
I0827 01:14:47.628193       6 log.go:172] (0xc001256b40) (3) Data frame sent
I0827 01:14:47.630985       6 log.go:172] (0xc002a4ae70) Data frame received for 3
I0827 01:14:47.631010       6 log.go:172] (0xc001256b40) (3) Data frame handling
I0827 01:14:47.631027       6 log.go:172] (0xc002a4ae70) Data frame received for 5
I0827 01:14:47.631037       6 log.go:172] (0xc0030312c0) (5) Data frame handling
I0827 01:14:47.632663       6 log.go:172] (0xc002a4ae70) Data frame received for 1
I0827 01:14:47.632685       6 log.go:172] (0xc001256aa0) (1) Data frame handling
I0827 01:14:47.632698       6 log.go:172] (0xc001256aa0) (1) Data frame sent
I0827 01:14:47.632719       6 log.go:172] (0xc002a4ae70) (0xc001256aa0) Stream removed, broadcasting: 1
I0827 01:14:47.632868       6 log.go:172] (0xc002a4ae70) Go away received
I0827 01:14:47.632952       6 log.go:172] (0xc002a4ae70) (0xc001256aa0) Stream removed, broadcasting: 1
I0827 01:14:47.632977       6 log.go:172] (0xc002a4ae70) (0xc001256b40) Stream removed, broadcasting: 3
I0827 01:14:47.632988       6 log.go:172] (0xc002a4ae70) (0xc0030312c0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Aug 27 01:14:47.633: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3417 PodName:dns-3417 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:14:47.633: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:14:47.661689       6 log.go:172] (0xc002a4b4a0) (0xc0012574a0) Create stream
I0827 01:14:47.661722       6 log.go:172] (0xc002a4b4a0) (0xc0012574a0) Stream added, broadcasting: 1
I0827 01:14:47.664065       6 log.go:172] (0xc002a4b4a0) Reply frame received for 1
I0827 01:14:47.664117       6 log.go:172] (0xc002a4b4a0) (0xc001257720) Create stream
I0827 01:14:47.664131       6 log.go:172] (0xc002a4b4a0) (0xc001257720) Stream added, broadcasting: 3
I0827 01:14:47.665198       6 log.go:172] (0xc002a4b4a0) Reply frame received for 3
I0827 01:14:47.665247       6 log.go:172] (0xc002a4b4a0) (0xc00120c1e0) Create stream
I0827 01:14:47.665268       6 log.go:172] (0xc002a4b4a0) (0xc00120c1e0) Stream added, broadcasting: 5
I0827 01:14:47.667986       6 log.go:172] (0xc002a4b4a0) Reply frame received for 5
I0827 01:14:47.721468       6 log.go:172] (0xc002a4b4a0) Data frame received for 3
I0827 01:14:47.721496       6 log.go:172] (0xc001257720) (3) Data frame handling
I0827 01:14:47.721521       6 log.go:172] (0xc001257720) (3) Data frame sent
I0827 01:14:47.724361       6 log.go:172] (0xc002a4b4a0) Data frame received for 5
I0827 01:14:47.724394       6 log.go:172] (0xc00120c1e0) (5) Data frame handling
I0827 01:14:47.724415       6 log.go:172] (0xc002a4b4a0) Data frame received for 3
I0827 01:14:47.724423       6 log.go:172] (0xc001257720) (3) Data frame handling
I0827 01:14:47.725877       6 log.go:172] (0xc002a4b4a0) Data frame received for 1
I0827 01:14:47.725906       6 log.go:172] (0xc0012574a0) (1) Data frame handling
I0827 01:14:47.725922       6 log.go:172] (0xc0012574a0) (1) Data frame sent
I0827 01:14:47.725938       6 log.go:172] (0xc002a4b4a0) (0xc0012574a0) Stream removed, broadcasting: 1
I0827 01:14:47.725958       6 log.go:172] (0xc002a4b4a0) Go away received
I0827 01:14:47.726080       6 log.go:172] (0xc002a4b4a0) (0xc0012574a0) Stream removed, broadcasting: 1
I0827 01:14:47.726108       6 log.go:172] (0xc002a4b4a0) (0xc001257720) Stream removed, broadcasting: 3
I0827 01:14:47.726126       6 log.go:172] (0xc002a4b4a0) (0xc00120c1e0) Stream removed, broadcasting: 5
Aug 27 01:14:47.726: INFO: Deleting pod dns-3417...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:14:47.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3417" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":39,"skipped":665,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:14:47.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:15:16.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9931" for this suite.
STEP: Destroying namespace "nsdeletetest-3720" for this suite.
Aug 27 01:15:16.447: INFO: Namespace nsdeletetest-3720 was already deleted
STEP: Destroying namespace "nsdeletetest-2" for this suite.

• [SLOW TEST:28.521 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":40,"skipped":672,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:15:16.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-cef6056c-adf7-41e1-a648-809f8638b66b
STEP: Creating a pod to test consume configMaps
Aug 27 01:15:16.677: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a8ab983a-116d-433c-b9bc-ebb413e49b24" in namespace "projected-2663" to be "success or failure"
Aug 27 01:15:16.930: INFO: Pod "pod-projected-configmaps-a8ab983a-116d-433c-b9bc-ebb413e49b24": Phase="Pending", Reason="", readiness=false. Elapsed: 252.56231ms
Aug 27 01:15:18.934: INFO: Pod "pod-projected-configmaps-a8ab983a-116d-433c-b9bc-ebb413e49b24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256479648s
Aug 27 01:15:21.502: INFO: Pod "pod-projected-configmaps-a8ab983a-116d-433c-b9bc-ebb413e49b24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.824159732s
Aug 27 01:15:24.010: INFO: Pod "pod-projected-configmaps-a8ab983a-116d-433c-b9bc-ebb413e49b24": Phase="Pending", Reason="", readiness=false. Elapsed: 7.332312261s
Aug 27 01:15:26.013: INFO: Pod "pod-projected-configmaps-a8ab983a-116d-433c-b9bc-ebb413e49b24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.335939262s
STEP: Saw pod success
Aug 27 01:15:26.013: INFO: Pod "pod-projected-configmaps-a8ab983a-116d-433c-b9bc-ebb413e49b24" satisfied condition "success or failure"
Aug 27 01:15:26.016: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-a8ab983a-116d-433c-b9bc-ebb413e49b24 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 01:15:26.422: INFO: Waiting for pod pod-projected-configmaps-a8ab983a-116d-433c-b9bc-ebb413e49b24 to disappear
Aug 27 01:15:26.960: INFO: Pod pod-projected-configmaps-a8ab983a-116d-433c-b9bc-ebb413e49b24 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:15:26.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2663" for this suite.

• [SLOW TEST:11.088 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":686,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:15:27.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1939
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-1939
I0827 01:15:28.298031       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1939, replica count: 2
I0827 01:15:31.348476       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 01:15:34.348659       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 01:15:37.348836       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 27 01:15:37.348: INFO: Creating new exec pod
Aug 27 01:15:49.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1939 execpod27jtc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 27 01:15:50.008: INFO: stderr: "I0827 01:15:49.930676     468 log.go:172] (0xc000106bb0) (0xc0006948c0) Create stream\nI0827 01:15:49.930732     468 log.go:172] (0xc000106bb0) (0xc0006948c0) Stream added, broadcasting: 1\nI0827 01:15:49.933173     468 log.go:172] (0xc000106bb0) Reply frame received for 1\nI0827 01:15:49.933229     468 log.go:172] (0xc000106bb0) (0xc000457680) Create stream\nI0827 01:15:49.933248     468 log.go:172] (0xc000106bb0) (0xc000457680) Stream added, broadcasting: 3\nI0827 01:15:49.933911     468 log.go:172] (0xc000106bb0) Reply frame received for 3\nI0827 01:15:49.933936     468 log.go:172] (0xc000106bb0) (0xc000940000) Create stream\nI0827 01:15:49.933943     468 log.go:172] (0xc000106bb0) (0xc000940000) Stream added, broadcasting: 5\nI0827 01:15:49.934635     468 log.go:172] (0xc000106bb0) Reply frame received for 5\nI0827 01:15:49.995068     468 log.go:172] (0xc000106bb0) Data frame received for 5\nI0827 01:15:49.995091     468 log.go:172] (0xc000940000) (5) Data frame handling\nI0827 01:15:49.995102     468 log.go:172] (0xc000940000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0827 01:15:49.995936     468 log.go:172] (0xc000106bb0) Data frame received for 5\nI0827 01:15:49.995959     468 log.go:172] (0xc000940000) (5) Data frame handling\nI0827 01:15:49.995986     468 log.go:172] (0xc000940000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0827 01:15:49.996245     468 log.go:172] (0xc000106bb0) Data frame received for 3\nI0827 01:15:49.996261     468 log.go:172] (0xc000457680) (3) Data frame handling\nI0827 01:15:49.996327     468 log.go:172] (0xc000106bb0) Data frame received for 5\nI0827 01:15:49.996339     468 log.go:172] (0xc000940000) (5) Data frame handling\nI0827 01:15:49.997689     468 log.go:172] (0xc000106bb0) Data frame received for 1\nI0827 01:15:49.997714     468 log.go:172] (0xc0006948c0) (1) Data frame handling\nI0827 01:15:49.997724     468 log.go:172] (0xc0006948c0) (1) Data frame sent\nI0827 01:15:49.997733     468 log.go:172] (0xc000106bb0) (0xc0006948c0) Stream removed, broadcasting: 1\nI0827 01:15:49.997825     468 log.go:172] (0xc000106bb0) Go away received\nI0827 01:15:49.998061     468 log.go:172] (0xc000106bb0) (0xc0006948c0) Stream removed, broadcasting: 1\nI0827 01:15:49.998075     468 log.go:172] (0xc000106bb0) (0xc000457680) Stream removed, broadcasting: 3\nI0827 01:15:49.998080     468 log.go:172] (0xc000106bb0) (0xc000940000) Stream removed, broadcasting: 5\n"
Aug 27 01:15:50.008: INFO: stdout: ""
Aug 27 01:15:50.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1939 execpod27jtc -- /bin/sh -x -c nc -zv -t -w 2 10.100.146.88 80'
Aug 27 01:15:50.228: INFO: stderr: "I0827 01:15:50.166485     490 log.go:172] (0xc0005c4d10) (0xc0005e6000) Create stream\nI0827 01:15:50.166557     490 log.go:172] (0xc0005c4d10) (0xc0005e6000) Stream added, broadcasting: 1\nI0827 01:15:50.168500     490 log.go:172] (0xc0005c4d10) Reply frame received for 1\nI0827 01:15:50.168550     490 log.go:172] (0xc0005c4d10) (0xc0005e60a0) Create stream\nI0827 01:15:50.168566     490 log.go:172] (0xc0005c4d10) (0xc0005e60a0) Stream added, broadcasting: 3\nI0827 01:15:50.169590     490 log.go:172] (0xc0005c4d10) Reply frame received for 3\nI0827 01:15:50.169621     490 log.go:172] (0xc0005c4d10) (0xc0005e61e0) Create stream\nI0827 01:15:50.169630     490 log.go:172] (0xc0005c4d10) (0xc0005e61e0) Stream added, broadcasting: 5\nI0827 01:15:50.170326     490 log.go:172] (0xc0005c4d10) Reply frame received for 5\nI0827 01:15:50.217768     490 log.go:172] (0xc0005c4d10) Data frame received for 5\nI0827 01:15:50.217804     490 log.go:172] (0xc0005e61e0) (5) Data frame handling\nI0827 01:15:50.217819     490 log.go:172] (0xc0005e61e0) (5) Data frame sent\nI0827 01:15:50.217831     490 log.go:172] (0xc0005c4d10) Data frame received for 5\nI0827 01:15:50.217841     490 log.go:172] (0xc0005e61e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.146.88 80\nConnection to 10.100.146.88 80 port [tcp/http] succeeded!\nI0827 01:15:50.217870     490 log.go:172] (0xc0005c4d10) Data frame received for 3\nI0827 01:15:50.217883     490 log.go:172] (0xc0005e60a0) (3) Data frame handling\nI0827 01:15:50.219827     490 log.go:172] (0xc0005c4d10) Data frame received for 1\nI0827 01:15:50.219848     490 log.go:172] (0xc0005e6000) (1) Data frame handling\nI0827 01:15:50.219857     490 log.go:172] (0xc0005e6000) (1) Data frame sent\nI0827 01:15:50.219865     490 log.go:172] (0xc0005c4d10) (0xc0005e6000) Stream removed, broadcasting: 1\nI0827 01:15:50.219878     490 log.go:172] (0xc0005c4d10) Go away received\nI0827 01:15:50.220210     490 log.go:172] (0xc0005c4d10) (0xc0005e6000) Stream removed, broadcasting: 1\nI0827 01:15:50.220225     490 log.go:172] (0xc0005c4d10) (0xc0005e60a0) Stream removed, broadcasting: 3\nI0827 01:15:50.220230     490 log.go:172] (0xc0005c4d10) (0xc0005e61e0) Stream removed, broadcasting: 5\n"
Aug 27 01:15:50.228: INFO: stdout: ""
Aug 27 01:15:50.228: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:15:50.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1939" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.725 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":42,"skipped":711,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:15:51.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:15:52.168: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-524eccf5-bb89-4e21-af61-e222c373c86d" in namespace "security-context-test-1838" to be "success or failure"
Aug 27 01:15:52.543: INFO: Pod "alpine-nnp-false-524eccf5-bb89-4e21-af61-e222c373c86d": Phase="Pending", Reason="", readiness=false. Elapsed: 375.397628ms
Aug 27 01:15:54.836: INFO: Pod "alpine-nnp-false-524eccf5-bb89-4e21-af61-e222c373c86d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.668406994s
Aug 27 01:15:57.549: INFO: Pod "alpine-nnp-false-524eccf5-bb89-4e21-af61-e222c373c86d": Phase="Running", Reason="", readiness=true. Elapsed: 5.381238631s
Aug 27 01:15:59.593: INFO: Pod "alpine-nnp-false-524eccf5-bb89-4e21-af61-e222c373c86d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.425680807s
Aug 27 01:15:59.593: INFO: Pod "alpine-nnp-false-524eccf5-bb89-4e21-af61-e222c373c86d" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:15:59.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1838" for this suite.

• [SLOW TEST:9.007 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":734,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:16:00.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-bfe05c1c-1076-40ba-9a9f-9a99a56e64c5
STEP: Creating a pod to test consume secrets
Aug 27 01:16:03.107: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-63667afd-5e5c-4bbb-b9e7-41dce8d6a137" in namespace "projected-5726" to be "success or failure"
Aug 27 01:16:03.254: INFO: Pod "pod-projected-secrets-63667afd-5e5c-4bbb-b9e7-41dce8d6a137": Phase="Pending", Reason="", readiness=false. Elapsed: 147.823985ms
Aug 27 01:16:05.675: INFO: Pod "pod-projected-secrets-63667afd-5e5c-4bbb-b9e7-41dce8d6a137": Phase="Pending", Reason="", readiness=false. Elapsed: 2.568532527s
Aug 27 01:16:07.683: INFO: Pod "pod-projected-secrets-63667afd-5e5c-4bbb-b9e7-41dce8d6a137": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576002562s
Aug 27 01:16:09.849: INFO: Pod "pod-projected-secrets-63667afd-5e5c-4bbb-b9e7-41dce8d6a137": Phase="Pending", Reason="", readiness=false. Elapsed: 6.742158552s
Aug 27 01:16:11.853: INFO: Pod "pod-projected-secrets-63667afd-5e5c-4bbb-b9e7-41dce8d6a137": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.746282068s
STEP: Saw pod success
Aug 27 01:16:11.853: INFO: Pod "pod-projected-secrets-63667afd-5e5c-4bbb-b9e7-41dce8d6a137" satisfied condition "success or failure"
Aug 27 01:16:11.855: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-63667afd-5e5c-4bbb-b9e7-41dce8d6a137 container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 01:16:11.987: INFO: Waiting for pod pod-projected-secrets-63667afd-5e5c-4bbb-b9e7-41dce8d6a137 to disappear
Aug 27 01:16:12.005: INFO: Pod pod-projected-secrets-63667afd-5e5c-4bbb-b9e7-41dce8d6a137 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:16:12.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5726" for this suite.

• [SLOW TEST:11.733 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":750,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
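------------------------------
Sketch (not from the suite): the spec above delivers a Secret through a "projected" volume rather than a plain secret volume. Same assumptions as the earlier sketch; all names are illustrative, and the clientset is built as shown there.

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createProjectedSecretPod creates a Secret and a pod that reads it back
// through a projected volume, mirroring the spec above.
func createProjectedSecretPod(ctx context.Context, clientset kubernetes.Interface, ns string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := clientset.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"cat", "/etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					// A projected volume can merge secrets, configMaps, and
					// downward API items; here it carries a single secret.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							},
						}},
					},
				},
			}},
		},
	}
	_, err := clientset.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}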
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:16:12.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276
STEP: creating the pod
Aug 27 01:16:12.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4721'
Aug 27 01:16:12.611: INFO: stderr: ""
Aug 27 01:16:12.611: INFO: stdout: "pod/pause created\n"
Aug 27 01:16:12.611: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 27 01:16:12.612: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4721" to be "running and ready"
Aug 27 01:16:12.771: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 159.336015ms
Aug 27 01:16:14.933: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321335476s
Aug 27 01:16:16.950: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338190211s
Aug 27 01:16:18.953: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.341876576s
Aug 27 01:16:18.953: INFO: Pod "pause" satisfied condition "running and ready"
Aug 27 01:16:18.953: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 27 01:16:18.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4721'
Aug 27 01:16:19.191: INFO: stderr: ""
Aug 27 01:16:19.191: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 27 01:16:19.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4721'
Aug 27 01:16:19.398: INFO: stderr: ""
Aug 27 01:16:19.398: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 27 01:16:19.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4721'
Aug 27 01:16:19.499: INFO: stderr: ""
Aug 27 01:16:19.499: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 27 01:16:19.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4721'
Aug 27 01:16:19.623: INFO: stderr: ""
Aug 27 01:16:19.623: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283
STEP: using delete to clean up resources
Aug 27 01:16:19.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4721'
Aug 27 01:16:20.165: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 01:16:20.165: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 27 01:16:20.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4721'
Aug 27 01:16:20.375: INFO: stderr: "No resources found in kubectl-4721 namespace.\n"
Aug 27 01:16:20.375: INFO: stdout: ""
Aug 27 01:16:20.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4721 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 01:16:20.473: INFO: stderr: ""
Aug 27 01:16:20.473: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:16:20.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4721" for this suite.

• [SLOW TEST:8.468 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":45,"skipped":776,"failed":0}
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:16:20.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-210f6647-2635-4289-af57-b3fb696e90fa
STEP: Creating secret with name secret-projected-all-test-volume-e8d55787-acf2-456d-a45b-e2692c28c420
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 27 01:16:20.778: INFO: Waiting up to 5m0s for pod "projected-volume-b34cc31e-2198-461e-8536-ff2921ad304e" in namespace "projected-8295" to be "success or failure"
Aug 27 01:16:20.974: INFO: Pod "projected-volume-b34cc31e-2198-461e-8536-ff2921ad304e": Phase="Pending", Reason="", readiness=false. Elapsed: 196.064953ms
Aug 27 01:16:22.978: INFO: Pod "projected-volume-b34cc31e-2198-461e-8536-ff2921ad304e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199596631s
Aug 27 01:16:25.118: INFO: Pod "projected-volume-b34cc31e-2198-461e-8536-ff2921ad304e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339846069s
Aug 27 01:16:27.122: INFO: Pod "projected-volume-b34cc31e-2198-461e-8536-ff2921ad304e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.344149246s
STEP: Saw pod success
Aug 27 01:16:27.123: INFO: Pod "projected-volume-b34cc31e-2198-461e-8536-ff2921ad304e" satisfied condition "success or failure"
Aug 27 01:16:27.126: INFO: Trying to get logs from node jerma-worker pod projected-volume-b34cc31e-2198-461e-8536-ff2921ad304e container projected-all-volume-test: 
STEP: delete the pod
Aug 27 01:16:27.191: INFO: Waiting for pod projected-volume-b34cc31e-2198-461e-8536-ff2921ad304e to disappear
Aug 27 01:16:27.202: INFO: Pod projected-volume-b34cc31e-2198-461e-8536-ff2921ad304e no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:16:27.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8295" for this suite.

• [SLOW TEST:6.730 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":46,"skipped":776,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:16:27.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:16:44.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9382" for this suite.

• [SLOW TEST:17.815 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":47,"skipped":786,"failed":0}
SSSSSSSSSSSSSSS
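------------------------------
Sketch (not from the suite): the quota object the spec creates counts Secrets; the apiserver then tracks the secret's creation and deletion in .status.used. Names and the hard limit are illustrative.

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSecretQuota caps the number of Secrets in the namespace; the
// apiserver keeps .status.used in step with actual usage.
func createSecretQuota(ctx context.Context, clientset kubernetes.Interface, ns string) error {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceSecrets: resource.MustParse("10"),
			},
		},
	}
	_, err := clientset.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{})
	return err
}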
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:16:45.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:16:45.486: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9a27be3-1c2a-4a3a-98da-c5c6024910f5" in namespace "downward-api-2626" to be "success or failure"
Aug 27 01:16:45.550: INFO: Pod "downwardapi-volume-a9a27be3-1c2a-4a3a-98da-c5c6024910f5": Phase="Pending", Reason="", readiness=false. Elapsed: 64.181048ms
Aug 27 01:16:47.627: INFO: Pod "downwardapi-volume-a9a27be3-1c2a-4a3a-98da-c5c6024910f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141112761s
Aug 27 01:16:49.765: INFO: Pod "downwardapi-volume-a9a27be3-1c2a-4a3a-98da-c5c6024910f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.279515682s
Aug 27 01:16:51.831: INFO: Pod "downwardapi-volume-a9a27be3-1c2a-4a3a-98da-c5c6024910f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.344937285s
Aug 27 01:16:53.939: INFO: Pod "downwardapi-volume-a9a27be3-1c2a-4a3a-98da-c5c6024910f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.453097683s
STEP: Saw pod success
Aug 27 01:16:53.939: INFO: Pod "downwardapi-volume-a9a27be3-1c2a-4a3a-98da-c5c6024910f5" satisfied condition "success or failure"
Aug 27 01:16:54.179: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a9a27be3-1c2a-4a3a-98da-c5c6024910f5 container client-container: 
STEP: delete the pod
Aug 27 01:16:54.610: INFO: Waiting for pod downwardapi-volume-a9a27be3-1c2a-4a3a-98da-c5c6024910f5 to disappear
Aug 27 01:16:54.621: INFO: Pod downwardapi-volume-a9a27be3-1c2a-4a3a-98da-c5c6024910f5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:16:54.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2626" for this suite.

• [SLOW TEST:9.606 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":801,"failed":0}
SSSSSSSSSSSSSSSSSSS
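------------------------------
Sketch (not from the suite): the downward API volume above exposes the container's own memory limit as a file via resourceFieldRef. Names, image, and the limit value are illustrative.

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryLimitPod writes the container's own memory limit into a file the
// container can read, via a downwardAPI volume item.
func memoryLimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"cat", "/etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// resourceFieldRef points back at this container's
							// own resource limits.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
}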
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:16:54.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:16:54.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:17:01.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4006" for this suite.

• [SLOW TEST:6.923 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":820,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
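------------------------------
Sketch (not from the suite): the spec dials the pod's exec subresource over a websocket; ordinary client code reaches the same subresource through the SPDY executor. Assumes a recent client-go (StreamWithContext); names and the command are illustrative.

import (
	"bytes"
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a command in a pod via the exec subresource and returns
// its stdout.
func execInPod(ctx context.Context, cfg *restclient.Config, clientset kubernetes.Interface, ns, pod string) (string, error) {
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	err = exec.StreamWithContext(ctx, remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), err
}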
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:17:01.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:17:01.851: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"34424db3-709e-469d-87d6-bc73662aeb48", Controller:(*bool)(0xc0003f1612), BlockOwnerDeletion:(*bool)(0xc0003f1613)}}
Aug 27 01:17:01.921: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"accdb367-a2e9-4a23-a8a5-b51ec16813b5", Controller:(*bool)(0xc000e1b58a), BlockOwnerDeletion:(*bool)(0xc000e1b58b)}}
Aug 27 01:17:01.940: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c2643b81-e2de-4cf8-8cd1-b8206fbaef9c", Controller:(*bool)(0xc002eb52ea), BlockOwnerDeletion:(*bool)(0xc002eb52eb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:17:07.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2848" for this suite.

• [SLOW TEST:5.529 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":50,"skipped":844,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
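------------------------------
Sketch (not from the suite): the circle above is pod1 owned by pod3, pod2 by pod1, and pod3 by pod2; the garbage collector must still collect all three without deadlocking. One way to build such owner references; the helper name is illustrative.

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownerRef points one pod at another as its controlling owner, the
// building block of the dependency circle in the log.
func ownerRef(owner *corev1.Pod) metav1.OwnerReference {
	controller, block := true, true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
}

// Usage, mirroring the log: pod1.OwnerReferences = []metav1.OwnerReference{ownerRef(pod3)},
// pod2 references pod1, pod3 references pod2.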
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:17:07.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 27 01:17:07.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7333'
Aug 27 01:17:07.828: INFO: stderr: ""
Aug 27 01:17:07.828: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 27 01:17:12.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7333 -o json'
Aug 27 01:17:12.980: INFO: stderr: ""
Aug 27 01:17:12.980: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-27T01:17:07Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-7333\",\n        \"resourceVersion\": \"4078560\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-7333/pods/e2e-test-httpd-pod\",\n        \"uid\": \"69e80df1-85af-4c73-b498-51150d184b0d\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-b6mfz\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-b6mfz\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-b6mfz\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-27T01:17:07Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-27T01:17:11Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-27T01:17:11Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-27T01:17:07Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://4a76e4e11df578b67cd27c4fe175f07251db3391d784f60ec74b08665fa0e568\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-27T01:17:10Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.6\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.118\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.118\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-27T01:17:07Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 27 01:17:12.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7333'
Aug 27 01:17:13.375: INFO: stderr: ""
Aug 27 01:17:13.375: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Aug 27 01:17:13.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7333'
Aug 27 01:17:20.184: INFO: stderr: ""
Aug 27 01:17:20.184: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:17:20.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7333" for this suite.

• [SLOW TEST:13.245 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":51,"skipped":878,"failed":0}
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:17:20.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0827 01:17:31.489716       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 01:17:31.489: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:17:31.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5182" for this suite.

• [SLOW TEST:11.167 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":52,"skipped":878,"failed":0}
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:17:31.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Aug 27 01:17:31.659: INFO: Waiting up to 5m0s for pod "client-containers-9f6362bc-e412-43ef-988d-d5e4de9bf78d" in namespace "containers-7955" to be "success or failure"
Aug 27 01:17:31.694: INFO: Pod "client-containers-9f6362bc-e412-43ef-988d-d5e4de9bf78d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.962625ms
Aug 27 01:17:33.699: INFO: Pod "client-containers-9f6362bc-e412-43ef-988d-d5e4de9bf78d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03921501s
Aug 27 01:17:35.719: INFO: Pod "client-containers-9f6362bc-e412-43ef-988d-d5e4de9bf78d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059379021s
Aug 27 01:17:37.867: INFO: Pod "client-containers-9f6362bc-e412-43ef-988d-d5e4de9bf78d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207831941s
STEP: Saw pod success
Aug 27 01:17:37.867: INFO: Pod "client-containers-9f6362bc-e412-43ef-988d-d5e4de9bf78d" satisfied condition "success or failure"
Aug 27 01:17:37.871: INFO: Trying to get logs from node jerma-worker pod client-containers-9f6362bc-e412-43ef-988d-d5e4de9bf78d container test-container: 
STEP: delete the pod
Aug 27 01:17:38.343: INFO: Waiting for pod client-containers-9f6362bc-e412-43ef-988d-d5e4de9bf78d to disappear
Aug 27 01:17:38.460: INFO: Pod client-containers-9f6362bc-e412-43ef-988d-d5e4de9bf78d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:17:38.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7955" for this suite.

• [SLOW TEST:7.335 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":883,"failed":0}
SSSS
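------------------------------
Sketch (not from the suite): the override works because spec.containers[].command replaces the image's ENTRYPOINT (args, if set, would replace its CMD). Names are illustrative.

import corev1 "k8s.io/api/core/v1"

// overrideEntrypoint returns a container whose command replaces the
// image's default entrypoint, as the spec above verifies.
func overrideEntrypoint() corev1.Container {
	return corev1.Container{
		Name:    "test-container",
		Image:   "busybox:1.29",
		Command: []string{"/bin/echo", "override", "entrypoint"},
	}
}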
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:17:38.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:17:39.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-9045" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":54,"skipped":887,"failed":0}
SSSSSS
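------------------------------
Sketch (not from the suite): the Lease API exercised above lives in coordination.k8s.io/v1. A minimal create; names and durations are illustrative.

import (
	"context"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createLease creates a Lease object of the kind the spec reads, updates,
// and deletes.
func createLease(ctx context.Context, clientset kubernetes.Interface, ns string) error {
	holder := "demo-holder"
	seconds := int32(30)
	acquired := metav1.NewMicroTime(time.Now())
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &seconds,
			AcquireTime:          &acquired,
		},
	}
	_, err := clientset.CoordinationV1().Leases(ns).Create(ctx, lease, metav1.CreateOptions{})
	return err
}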
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:17:39.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-15f17a77-3a9f-45d0-a6c6-07d7fa8cd9b4
STEP: Creating a pod to test consume configMaps
Aug 27 01:17:39.813: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-becf7722-1c44-47ef-a120-4cf877fe8f32" in namespace "projected-1231" to be "success or failure"
Aug 27 01:17:40.259: INFO: Pod "pod-projected-configmaps-becf7722-1c44-47ef-a120-4cf877fe8f32": Phase="Pending", Reason="", readiness=false. Elapsed: 445.506635ms
Aug 27 01:17:42.262: INFO: Pod "pod-projected-configmaps-becf7722-1c44-47ef-a120-4cf877fe8f32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.44902897s
Aug 27 01:17:45.017: INFO: Pod "pod-projected-configmaps-becf7722-1c44-47ef-a120-4cf877fe8f32": Phase="Pending", Reason="", readiness=false. Elapsed: 5.203759428s
Aug 27 01:17:47.021: INFO: Pod "pod-projected-configmaps-becf7722-1c44-47ef-a120-4cf877fe8f32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.207362438s
STEP: Saw pod success
Aug 27 01:17:47.021: INFO: Pod "pod-projected-configmaps-becf7722-1c44-47ef-a120-4cf877fe8f32" satisfied condition "success or failure"
Aug 27 01:17:47.022: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-becf7722-1c44-47ef-a120-4cf877fe8f32 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 01:17:47.056: INFO: Waiting for pod pod-projected-configmaps-becf7722-1c44-47ef-a120-4cf877fe8f32 to disappear
Aug 27 01:17:47.075: INFO: Pod pod-projected-configmaps-becf7722-1c44-47ef-a120-4cf877fe8f32 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:17:47.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1231" for this suite.

• [SLOW TEST:7.671 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":893,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:17:47.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:17:47.260: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f09c4ce5-ffe9-4ebd-bf16-8c622a6b40b3" in namespace "projected-488" to be "success or failure"
Aug 27 01:17:47.307: INFO: Pod "downwardapi-volume-f09c4ce5-ffe9-4ebd-bf16-8c622a6b40b3": Phase="Pending", Reason="", readiness=false. Elapsed: 46.296707ms
Aug 27 01:17:49.310: INFO: Pod "downwardapi-volume-f09c4ce5-ffe9-4ebd-bf16-8c622a6b40b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049671022s
Aug 27 01:17:51.314: INFO: Pod "downwardapi-volume-f09c4ce5-ffe9-4ebd-bf16-8c622a6b40b3": Phase="Running", Reason="", readiness=true. Elapsed: 4.053160007s
Aug 27 01:17:53.317: INFO: Pod "downwardapi-volume-f09c4ce5-ffe9-4ebd-bf16-8c622a6b40b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056946423s
STEP: Saw pod success
Aug 27 01:17:53.317: INFO: Pod "downwardapi-volume-f09c4ce5-ffe9-4ebd-bf16-8c622a6b40b3" satisfied condition "success or failure"
Aug 27 01:17:53.320: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f09c4ce5-ffe9-4ebd-bf16-8c622a6b40b3 container client-container: 
STEP: delete the pod
Aug 27 01:17:53.343: INFO: Waiting for pod downwardapi-volume-f09c4ce5-ffe9-4ebd-bf16-8c622a6b40b3 to disappear
Aug 27 01:17:53.351: INFO: Pod downwardapi-volume-f09c4ce5-ffe9-4ebd-bf16-8c622a6b40b3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:17:53.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-488" for this suite.

• [SLOW TEST:6.259 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":893,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:17:53.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-1ed90bf0-9f51-4db2-ba3a-44d195c8b77b
STEP: Creating a pod to test consume secrets
Aug 27 01:17:53.464: INFO: Waiting up to 5m0s for pod "pod-secrets-1485a4ba-b4f4-434c-b62a-bb880a4289d1" in namespace "secrets-1377" to be "success or failure"
Aug 27 01:17:53.480: INFO: Pod "pod-secrets-1485a4ba-b4f4-434c-b62a-bb880a4289d1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.153812ms
Aug 27 01:17:55.486: INFO: Pod "pod-secrets-1485a4ba-b4f4-434c-b62a-bb880a4289d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021281242s
Aug 27 01:17:57.490: INFO: Pod "pod-secrets-1485a4ba-b4f4-434c-b62a-bb880a4289d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025630612s
Aug 27 01:17:59.495: INFO: Pod "pod-secrets-1485a4ba-b4f4-434c-b62a-bb880a4289d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030389793s
STEP: Saw pod success
Aug 27 01:17:59.495: INFO: Pod "pod-secrets-1485a4ba-b4f4-434c-b62a-bb880a4289d1" satisfied condition "success or failure"
Aug 27 01:17:59.497: INFO: Trying to get logs from node jerma-worker pod pod-secrets-1485a4ba-b4f4-434c-b62a-bb880a4289d1 container secret-volume-test: 
STEP: delete the pod
Aug 27 01:17:59.557: INFO: Waiting for pod pod-secrets-1485a4ba-b4f4-434c-b62a-bb880a4289d1 to disappear
Aug 27 01:17:59.570: INFO: Pod pod-secrets-1485a4ba-b4f4-434c-b62a-bb880a4289d1 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:17:59.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1377" for this suite.

• [SLOW TEST:6.219 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":922,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:17:59.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-9634
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9634 to expose endpoints map[]
Aug 27 01:17:59.786: INFO: Get endpoints failed (12.055681ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 27 01:18:00.843: INFO: successfully validated that service endpoint-test2 in namespace services-9634 exposes endpoints map[] (1.069174534s elapsed)
STEP: Creating pod pod1 in namespace services-9634
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9634 to expose endpoints map[pod1:[80]]
Aug 27 01:18:04.115: INFO: successfully validated that service endpoint-test2 in namespace services-9634 exposes endpoints map[pod1:[80]] (3.265889291s elapsed)
STEP: Creating pod pod2 in namespace services-9634
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9634 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 27 01:18:08.392: INFO: successfully validated that service endpoint-test2 in namespace services-9634 exposes endpoints map[pod1:[80] pod2:[80]] (4.271636844s elapsed)
STEP: Deleting pod pod1 in namespace services-9634
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9634 to expose endpoints map[pod2:[80]]
Aug 27 01:18:09.532: INFO: successfully validated that service endpoint-test2 in namespace services-9634 exposes endpoints map[pod2:[80]] (1.135004337s elapsed)
STEP: Deleting pod pod2 in namespace services-9634
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9634 to expose endpoints map[]
Aug 27 01:18:10.550: INFO: successfully validated that service endpoint-test2 in namespace services-9634 exposes endpoints map[] (1.013913204s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:18:10.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9634" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:11.345 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":58,"skipped":942,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
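------------------------------
Sketch (not from the suite): the spec above creates Service endpoint-test2 and then polls the Endpoints object of the same name as matching pods come and go. A minimal Service plus an Endpoints read; the selector is illustrative, the port is from the log.

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// endpointService creates a selector Service and reads back the Endpoints
// object the endpoints controller maintains for it.
func endpointService(ctx context.Context, clientset kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "endpoint-test2"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := clientset.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Ready pods matching the selector appear in .subsets as they come and
	// go, which is what the spec above waits on.
	eps, err := clientset.CoreV1().Endpoints(ns).Get(ctx, "endpoint-test2", metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("subsets: %+v\n", eps.Subsets)
	return nil
}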
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:18:10.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-127a32f8-e68d-488d-910a-173de53bdee6
STEP: Creating a pod to test consume secrets
Aug 27 01:18:11.153: INFO: Waiting up to 5m0s for pod "pod-secrets-0b40c08f-3bb0-4f4f-8965-2cd99bb3f0ba" in namespace "secrets-4287" to be "success or failure"
Aug 27 01:18:11.157: INFO: Pod "pod-secrets-0b40c08f-3bb0-4f4f-8965-2cd99bb3f0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007278ms
Aug 27 01:18:13.272: INFO: Pod "pod-secrets-0b40c08f-3bb0-4f4f-8965-2cd99bb3f0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118963492s
Aug 27 01:18:15.275: INFO: Pod "pod-secrets-0b40c08f-3bb0-4f4f-8965-2cd99bb3f0ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122155752s
STEP: Saw pod success
Aug 27 01:18:15.275: INFO: Pod "pod-secrets-0b40c08f-3bb0-4f4f-8965-2cd99bb3f0ba" satisfied condition "success or failure"
Aug 27 01:18:15.278: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-0b40c08f-3bb0-4f4f-8965-2cd99bb3f0ba container secret-volume-test: 
STEP: delete the pod
Aug 27 01:18:15.328: INFO: Waiting for pod pod-secrets-0b40c08f-3bb0-4f4f-8965-2cd99bb3f0ba to disappear
Aug 27 01:18:15.358: INFO: Pod pod-secrets-0b40c08f-3bb0-4f4f-8965-2cd99bb3f0ba no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:18:15.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4287" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":986,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:18:15.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4863
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4863
STEP: Creating statefulset with conflicting port in namespace statefulset-4863
STEP: Waiting until pod test-pod starts running in namespace statefulset-4863
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4863
Aug 27 01:18:19.701: INFO: Observed stateful pod in namespace: statefulset-4863, name: ss-0, uid: c5b640f9-f9d0-402b-9e65-3c9b1bbf5749, status phase: Failed. Waiting for statefulset controller to delete.
Aug 27 01:18:19.714: INFO: Observed stateful pod in namespace: statefulset-4863, name: ss-0, uid: c5b640f9-f9d0-402b-9e65-3c9b1bbf5749, status phase: Failed. Waiting for statefulset controller to delete.
Aug 27 01:18:19.752: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4863
STEP: Removing pod with conflicting port in namespace statefulset-4863
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4863 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 27 01:18:26.135: INFO: Deleting all statefulset in ns statefulset-4863
Aug 27 01:18:26.137: INFO: Scaling statefulset ss to 0
Aug 27 01:18:36.197: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 01:18:36.199: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:18:36.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4863" for this suite.

• [SLOW TEST:20.899 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":60,"skipped":991,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:18:36.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 27 01:18:36.345: INFO: PodSpec: initContainers in spec.initContainers
Aug 27 01:19:34.209: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fdad7f1d-7ab3-440f-aea0-178360d131cc", GenerateName:"", Namespace:"init-container-8074", SelfLink:"/api/v1/namespaces/init-container-8074/pods/pod-init-fdad7f1d-7ab3-440f-aea0-178360d131cc", UID:"95a167aa-d8a4-4afe-a4bc-9bb4e5b8e364", ResourceVersion:"4079594", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734087916, loc:(*time.Location)(0x7931640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"345615472"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8mdvq", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0008be600), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8mdvq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8mdvq", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8mdvq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000e1a4f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00315e060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000e1a750)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000e1a780)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000e1a788), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000e1a78c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087916, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087916, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087916, loc:(*time.Location)(0x7931640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087916, loc:(*time.Location)(0x7931640)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.3", PodIP:"10.244.1.109", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.109"}}, StartTime:(*v1.Time)(0xc002c54360), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002c54460), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0006b0310)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://b5f5ad002969c138c1ab802e8f3f0c773d4bdb5059c259f386ffd5cedf1ac3d3", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c544e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c54400), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc000e1a84f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:19:34.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8074" for this suite.

• [SLOW TEST:58.020 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":61,"skipped":1007,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:19:34.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Aug 27 01:19:35.193: INFO: Waiting up to 5m0s for pod "var-expansion-d20ffa0a-945e-44ab-b55d-1f6064257d1b" in namespace "var-expansion-9990" to be "success or failure"
Aug 27 01:19:35.329: INFO: Pod "var-expansion-d20ffa0a-945e-44ab-b55d-1f6064257d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 136.616191ms
Aug 27 01:19:37.333: INFO: Pod "var-expansion-d20ffa0a-945e-44ab-b55d-1f6064257d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140072488s
Aug 27 01:19:39.395: INFO: Pod "var-expansion-d20ffa0a-945e-44ab-b55d-1f6064257d1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202428137s
Aug 27 01:19:41.485: INFO: Pod "var-expansion-d20ffa0a-945e-44ab-b55d-1f6064257d1b": Phase="Running", Reason="", readiness=true. Elapsed: 6.292484546s
Aug 27 01:19:43.617: INFO: Pod "var-expansion-d20ffa0a-945e-44ab-b55d-1f6064257d1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.42408587s
STEP: Saw pod success
Aug 27 01:19:43.617: INFO: Pod "var-expansion-d20ffa0a-945e-44ab-b55d-1f6064257d1b" satisfied condition "success or failure"
Aug 27 01:19:43.705: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-d20ffa0a-945e-44ab-b55d-1f6064257d1b container dapi-container: 
STEP: delete the pod
Aug 27 01:19:43.813: INFO: Waiting for pod var-expansion-d20ffa0a-945e-44ab-b55d-1f6064257d1b to disappear
Aug 27 01:19:43.878: INFO: Pod var-expansion-d20ffa0a-945e-44ab-b55d-1f6064257d1b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:19:43.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9990" for this suite.

• [SLOW TEST:9.722 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1015,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:19:44.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:19:44.768: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 27 01:19:45.069: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 27 01:19:50.073: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 27 01:19:50.073: INFO: Creating deployment "test-rolling-update-deployment"
Aug 27 01:19:50.078: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision after the one the adopted replica set "test-rolling-update-controller" has
Aug 27 01:19:50.090: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 27 01:19:52.484: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Aug 27 01:19:52.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087990, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087990, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087990, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087990, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:19:54.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087990, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087990, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087990, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734087990, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:19:56.801: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 27 01:19:56.811: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-173 /apis/apps/v1/namespaces/deployment-173/deployments/test-rolling-update-deployment 0fbc78e0-c8fb-4736-8f40-a1a111874898 4079737 1 2020-08-27 01:19:50 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e05e68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-27 01:19:50 +0000 UTC,LastTransitionTime:2020-08-27 01:19:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-08-27 01:19:56 +0000 UTC,LastTransitionTime:2020-08-27 01:19:50 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 27 01:19:56.814: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-173 /apis/apps/v1/namespaces/deployment-173/replicasets/test-rolling-update-deployment-67cf4f6444 341cec07-7ff0-4386-9b23-38787eb52805 4079726 1 2020-08-27 01:19:50 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 0fbc78e0-c8fb-4736-8f40-a1a111874898 0xc002b127e7 0xc002b127e8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b12908  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 27 01:19:56.814: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 27 01:19:56.814: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-173 /apis/apps/v1/namespaces/deployment-173/replicasets/test-rolling-update-controller e7c642d7-5abb-4051-a917-d6e093261536 4079735 2 2020-08-27 01:19:44 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 0fbc78e0-c8fb-4736-8f40-a1a111874898 0xc002b125af 0xc002b125c0}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002b12648  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 27 01:19:56.817: INFO: Pod "test-rolling-update-deployment-67cf4f6444-rc759" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-rc759 test-rolling-update-deployment-67cf4f6444- deployment-173 /api/v1/namespaces/deployment-173/pods/test-rolling-update-deployment-67cf4f6444-rc759 cd69ec8c-52c1-4894-95e8-f1f6fa34db90 4079725 0 2020-08-27 01:19:50 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 341cec07-7ff0-4386-9b23-38787eb52805 0xc002aa44f7 0xc002aa44f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9nth9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9nth9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9nth9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:19:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:19:55 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:19:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:19:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.129,StartTime:2020-08-27 01:19:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:19:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://f9965a7074f62c28b661775be5f9d9babdf48c5026d1a110d22cad61beab8d01,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:19:56.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-173" for this suite.

• [SLOW TEST:12.818 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":63,"skipped":1019,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:19:56.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5238
STEP: Creating active service to test reachability when its FQDN is referenced as the externalName of another service
STEP: creating service externalsvc in namespace services-5238
STEP: creating replication controller externalsvc in namespace services-5238
I0827 01:19:57.472118       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5238, replica count: 2
I0827 01:20:00.522547       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 01:20:03.522786       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 27 01:20:03.890: INFO: Creating new exec pod
Aug 27 01:20:08.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5238 execpodzhsqk -- /bin/sh -x -c nslookup clusterip-service'
Aug 27 01:20:11.306: INFO: stderr: "I0827 01:20:11.215714     744 log.go:172] (0xc0008586e0) (0xc0006bdea0) Create stream\nI0827 01:20:11.215747     744 log.go:172] (0xc0008586e0) (0xc0006bdea0) Stream added, broadcasting: 1\nI0827 01:20:11.218460     744 log.go:172] (0xc0008586e0) Reply frame received for 1\nI0827 01:20:11.218502     744 log.go:172] (0xc0008586e0) (0xc0007d54a0) Create stream\nI0827 01:20:11.218517     744 log.go:172] (0xc0008586e0) (0xc0007d54a0) Stream added, broadcasting: 3\nI0827 01:20:11.219409     744 log.go:172] (0xc0008586e0) Reply frame received for 3\nI0827 01:20:11.219459     744 log.go:172] (0xc0008586e0) (0xc0007d5540) Create stream\nI0827 01:20:11.219473     744 log.go:172] (0xc0008586e0) (0xc0007d5540) Stream added, broadcasting: 5\nI0827 01:20:11.220322     744 log.go:172] (0xc0008586e0) Reply frame received for 5\nI0827 01:20:11.286424     744 log.go:172] (0xc0008586e0) Data frame received for 5\nI0827 01:20:11.286447     744 log.go:172] (0xc0007d5540) (5) Data frame handling\nI0827 01:20:11.286462     744 log.go:172] (0xc0007d5540) (5) Data frame sent\n+ nslookup clusterip-service\nI0827 01:20:11.293150     744 log.go:172] (0xc0008586e0) Data frame received for 3\nI0827 01:20:11.293169     744 log.go:172] (0xc0007d54a0) (3) Data frame handling\nI0827 01:20:11.293183     744 log.go:172] (0xc0007d54a0) (3) Data frame sent\nI0827 01:20:11.294377     744 log.go:172] (0xc0008586e0) Data frame received for 3\nI0827 01:20:11.294391     744 log.go:172] (0xc0007d54a0) (3) Data frame handling\nI0827 01:20:11.294403     744 log.go:172] (0xc0007d54a0) (3) Data frame sent\nI0827 01:20:11.295113     744 log.go:172] (0xc0008586e0) Data frame received for 3\nI0827 01:20:11.295128     744 log.go:172] (0xc0007d54a0) (3) Data frame handling\nI0827 01:20:11.295291     744 log.go:172] (0xc0008586e0) Data frame received for 5\nI0827 01:20:11.295305     744 log.go:172] (0xc0007d5540) (5) Data frame handling\nI0827 01:20:11.297178     744 log.go:172] (0xc0008586e0) Data frame received for 1\nI0827 01:20:11.297193     744 log.go:172] (0xc0006bdea0) (1) Data frame handling\nI0827 01:20:11.297202     744 log.go:172] (0xc0006bdea0) (1) Data frame sent\nI0827 01:20:11.297213     744 log.go:172] (0xc0008586e0) (0xc0006bdea0) Stream removed, broadcasting: 1\nI0827 01:20:11.297229     744 log.go:172] (0xc0008586e0) Go away received\nI0827 01:20:11.297533     744 log.go:172] (0xc0008586e0) (0xc0006bdea0) Stream removed, broadcasting: 1\nI0827 01:20:11.297547     744 log.go:172] (0xc0008586e0) (0xc0007d54a0) Stream removed, broadcasting: 3\nI0827 01:20:11.297553     744 log.go:172] (0xc0008586e0) (0xc0007d5540) Stream removed, broadcasting: 5\n"
Aug 27 01:20:11.306: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5238.svc.cluster.local\tcanonical name = externalsvc.services-5238.svc.cluster.local.\nName:\texternalsvc.services-5238.svc.cluster.local\nAddress: 10.97.46.219\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5238, will wait for the garbage collector to delete the pods
Aug 27 01:20:11.365: INFO: Deleting ReplicationController externalsvc took: 5.899171ms
Aug 27 01:20:11.666: INFO: Terminating ReplicationController externalsvc pods took: 301.202662ms
Aug 27 01:20:16.402: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:20:16.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5238" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:19.629 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":64,"skipped":1023,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:20:16.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 27 01:20:20.593: INFO: &Pod{ObjectMeta:{send-events-e19994aa-271b-40f6-aa55-908603c432ed  events-6858 /api/v1/namespaces/events-6858/pods/send-events-e19994aa-271b-40f6-aa55-908603c432ed eb4b4647-2f72-4cf8-a580-0db51d234f29 4079907 0 2020-08-27 01:20:16 +0000 UTC   map[name:foo time:551469699] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2rvxb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2rvxb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2rvxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:20:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:20:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:20:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:20:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.132,StartTime:2020-08-27 01:20:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:20:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://54dd115fb2e52eb1381a75270292170fbc2b78cdca26bc63ee549e937a199884,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.132,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Aug 27 01:20:22.629: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 27 01:20:24.634: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:20:24.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6858" for this suite.

• [SLOW TEST:8.198 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":65,"skipped":1121,"failed":0}
SS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:20:24.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-395
STEP: creating replication controller nodeport-test in namespace services-395
I0827 01:20:24.899414       6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-395, replica count: 2
I0827 01:20:27.949883       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 01:20:30.950145       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 27 01:20:30.950: INFO: Creating new exec pod
Aug 27 01:20:40.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-395 execpodf7nt4 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 27 01:20:40.600: INFO: stderr: "I0827 01:20:40.505668     779 log.go:172] (0xc000987290) (0xc0009b63c0) Create stream\nI0827 01:20:40.505718     779 log.go:172] (0xc000987290) (0xc0009b63c0) Stream added, broadcasting: 1\nI0827 01:20:40.507824     779 log.go:172] (0xc000987290) Reply frame received for 1\nI0827 01:20:40.507859     779 log.go:172] (0xc000987290) (0xc000948280) Create stream\nI0827 01:20:40.507872     779 log.go:172] (0xc000987290) (0xc000948280) Stream added, broadcasting: 3\nI0827 01:20:40.508540     779 log.go:172] (0xc000987290) Reply frame received for 3\nI0827 01:20:40.508597     779 log.go:172] (0xc000987290) (0xc000ac48c0) Create stream\nI0827 01:20:40.508623     779 log.go:172] (0xc000987290) (0xc000ac48c0) Stream added, broadcasting: 5\nI0827 01:20:40.509465     779 log.go:172] (0xc000987290) Reply frame received for 5\nI0827 01:20:40.589760     779 log.go:172] (0xc000987290) Data frame received for 3\nI0827 01:20:40.589793     779 log.go:172] (0xc000948280) (3) Data frame handling\nI0827 01:20:40.589827     779 log.go:172] (0xc000987290) Data frame received for 5\nI0827 01:20:40.589835     779 log.go:172] (0xc000ac48c0) (5) Data frame handling\nI0827 01:20:40.589842     779 log.go:172] (0xc000ac48c0) (5) Data frame sent\nI0827 01:20:40.589847     779 log.go:172] (0xc000987290) Data frame received for 5\nI0827 01:20:40.589851     779 log.go:172] (0xc000ac48c0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0827 01:20:40.590954     779 log.go:172] (0xc000987290) Data frame received for 1\nI0827 01:20:40.590966     779 log.go:172] (0xc0009b63c0) (1) Data frame handling\nI0827 01:20:40.590972     779 log.go:172] (0xc0009b63c0) (1) Data frame sent\nI0827 01:20:40.591066     779 log.go:172] (0xc000987290) (0xc0009b63c0) Stream removed, broadcasting: 1\nI0827 01:20:40.591137     779 log.go:172] (0xc000987290) Go away received\nI0827 01:20:40.591351     779 log.go:172] (0xc000987290) (0xc0009b63c0) Stream removed, broadcasting: 1\nI0827 01:20:40.591372     779 log.go:172] (0xc000987290) (0xc000948280) Stream removed, broadcasting: 3\nI0827 01:20:40.591384     779 log.go:172] (0xc000987290) (0xc000ac48c0) Stream removed, broadcasting: 5\n"
Aug 27 01:20:40.600: INFO: stdout: ""
Aug 27 01:20:40.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-395 execpodf7nt4 -- /bin/sh -x -c nc -zv -t -w 2 10.99.127.236 80'
Aug 27 01:20:40.810: INFO: stderr: "I0827 01:20:40.726470     799 log.go:172] (0xc00092e9a0) (0xc0007c21e0) Create stream\nI0827 01:20:40.726526     799 log.go:172] (0xc00092e9a0) (0xc0007c21e0) Stream added, broadcasting: 1\nI0827 01:20:40.728683     799 log.go:172] (0xc00092e9a0) Reply frame received for 1\nI0827 01:20:40.728712     799 log.go:172] (0xc00092e9a0) (0xc0007514a0) Create stream\nI0827 01:20:40.728800     799 log.go:172] (0xc00092e9a0) (0xc0007514a0) Stream added, broadcasting: 3\nI0827 01:20:40.729445     799 log.go:172] (0xc00092e9a0) Reply frame received for 3\nI0827 01:20:40.729489     799 log.go:172] (0xc00092e9a0) (0xc0007c2280) Create stream\nI0827 01:20:40.729497     799 log.go:172] (0xc00092e9a0) (0xc0007c2280) Stream added, broadcasting: 5\nI0827 01:20:40.730298     799 log.go:172] (0xc00092e9a0) Reply frame received for 5\nI0827 01:20:40.802459     799 log.go:172] (0xc00092e9a0) Data frame received for 3\nI0827 01:20:40.802495     799 log.go:172] (0xc0007514a0) (3) Data frame handling\nI0827 01:20:40.802515     799 log.go:172] (0xc00092e9a0) Data frame received for 5\nI0827 01:20:40.802521     799 log.go:172] (0xc0007c2280) (5) Data frame handling\nI0827 01:20:40.802527     799 log.go:172] (0xc0007c2280) (5) Data frame sent\nI0827 01:20:40.802533     799 log.go:172] (0xc00092e9a0) Data frame received for 5\nI0827 01:20:40.802538     799 log.go:172] (0xc0007c2280) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.127.236 80\nConnection to 10.99.127.236 80 port [tcp/http] succeeded!\nI0827 01:20:40.803830     799 log.go:172] (0xc00092e9a0) Data frame received for 1\nI0827 01:20:40.803856     799 log.go:172] (0xc0007c21e0) (1) Data frame handling\nI0827 01:20:40.803869     799 log.go:172] (0xc0007c21e0) (1) Data frame sent\nI0827 01:20:40.803880     799 log.go:172] (0xc00092e9a0) (0xc0007c21e0) Stream removed, broadcasting: 1\nI0827 01:20:40.803959     799 log.go:172] (0xc00092e9a0) Go away received\nI0827 01:20:40.804173     799 log.go:172] (0xc00092e9a0) (0xc0007c21e0) Stream removed, broadcasting: 1\nI0827 01:20:40.804188     799 log.go:172] (0xc00092e9a0) (0xc0007514a0) Stream removed, broadcasting: 3\nI0827 01:20:40.804196     799 log.go:172] (0xc00092e9a0) (0xc0007c2280) Stream removed, broadcasting: 5\n"
Aug 27 01:20:40.810: INFO: stdout: ""
Aug 27 01:20:40.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-395 execpodf7nt4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 30329'
Aug 27 01:20:41.426: INFO: stderr: "I0827 01:20:41.344579     820 log.go:172] (0xc000512c60) (0xc0005cbb80) Create stream\nI0827 01:20:41.344630     820 log.go:172] (0xc000512c60) (0xc0005cbb80) Stream added, broadcasting: 1\nI0827 01:20:41.346538     820 log.go:172] (0xc000512c60) Reply frame received for 1\nI0827 01:20:41.346563     820 log.go:172] (0xc000512c60) (0xc000836000) Create stream\nI0827 01:20:41.346569     820 log.go:172] (0xc000512c60) (0xc000836000) Stream added, broadcasting: 3\nI0827 01:20:41.347566     820 log.go:172] (0xc000512c60) Reply frame received for 3\nI0827 01:20:41.347626     820 log.go:172] (0xc000512c60) (0xc0008360a0) Create stream\nI0827 01:20:41.347644     820 log.go:172] (0xc000512c60) (0xc0008360a0) Stream added, broadcasting: 5\nI0827 01:20:41.349825     820 log.go:172] (0xc000512c60) Reply frame received for 5\nI0827 01:20:41.413210     820 log.go:172] (0xc000512c60) Data frame received for 5\nI0827 01:20:41.413241     820 log.go:172] (0xc0008360a0) (5) Data frame handling\nI0827 01:20:41.413260     820 log.go:172] (0xc0008360a0) (5) Data frame sent\nI0827 01:20:41.413268     820 log.go:172] (0xc000512c60) Data frame received for 5\nI0827 01:20:41.413274     820 log.go:172] (0xc0008360a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 30329\nConnection to 172.18.0.6 30329 port [tcp/30329] succeeded!\nI0827 01:20:41.413291     820 log.go:172] (0xc0008360a0) (5) Data frame sent\nI0827 01:20:41.413625     820 log.go:172] (0xc000512c60) Data frame received for 5\nI0827 01:20:41.413644     820 log.go:172] (0xc0008360a0) (5) Data frame handling\nI0827 01:20:41.413756     820 log.go:172] (0xc000512c60) Data frame received for 3\nI0827 01:20:41.413769     820 log.go:172] (0xc000836000) (3) Data frame handling\nI0827 01:20:41.415200     820 log.go:172] (0xc000512c60) Data frame received for 1\nI0827 01:20:41.415212     820 log.go:172] (0xc0005cbb80) (1) Data frame handling\nI0827 01:20:41.415223     820 log.go:172] (0xc0005cbb80) (1) Data frame sent\nI0827 01:20:41.415302     820 log.go:172] (0xc000512c60) (0xc0005cbb80) Stream removed, broadcasting: 1\nI0827 01:20:41.415321     820 log.go:172] (0xc000512c60) Go away received\nI0827 01:20:41.415579     820 log.go:172] (0xc000512c60) (0xc0005cbb80) Stream removed, broadcasting: 1\nI0827 01:20:41.415591     820 log.go:172] (0xc000512c60) (0xc000836000) Stream removed, broadcasting: 3\nI0827 01:20:41.415598     820 log.go:172] (0xc000512c60) (0xc0008360a0) Stream removed, broadcasting: 5\n"
Aug 27 01:20:41.426: INFO: stdout: ""
Aug 27 01:20:41.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-395 execpodf7nt4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 30329'
Aug 27 01:20:41.665: INFO: stderr: "I0827 01:20:41.584204     840 log.go:172] (0xc000946c60) (0xc00093e1e0) Create stream\nI0827 01:20:41.584260     840 log.go:172] (0xc000946c60) (0xc00093e1e0) Stream added, broadcasting: 1\nI0827 01:20:41.588372     840 log.go:172] (0xc000946c60) Reply frame received for 1\nI0827 01:20:41.588408     840 log.go:172] (0xc000946c60) (0xc0004f6820) Create stream\nI0827 01:20:41.588420     840 log.go:172] (0xc000946c60) (0xc0004f6820) Stream added, broadcasting: 3\nI0827 01:20:41.589451     840 log.go:172] (0xc000946c60) Reply frame received for 3\nI0827 01:20:41.589496     840 log.go:172] (0xc000946c60) (0xc0006f0a00) Create stream\nI0827 01:20:41.589517     840 log.go:172] (0xc000946c60) (0xc0006f0a00) Stream added, broadcasting: 5\nI0827 01:20:41.590349     840 log.go:172] (0xc000946c60) Reply frame received for 5\nI0827 01:20:41.660164     840 log.go:172] (0xc000946c60) Data frame received for 5\nI0827 01:20:41.660197     840 log.go:172] (0xc0006f0a00) (5) Data frame handling\nI0827 01:20:41.660210     840 log.go:172] (0xc0006f0a00) (5) Data frame sent\nI0827 01:20:41.660216     840 log.go:172] (0xc000946c60) Data frame received for 5\nI0827 01:20:41.660221     840 log.go:172] (0xc0006f0a00) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.3 30329\nConnection to 172.18.0.3 30329 port [tcp/30329] succeeded!\nI0827 01:20:41.660239     840 log.go:172] (0xc000946c60) Data frame received for 3\nI0827 01:20:41.660243     840 log.go:172] (0xc0004f6820) (3) Data frame handling\nI0827 01:20:41.661310     840 log.go:172] (0xc000946c60) Data frame received for 1\nI0827 01:20:41.661343     840 log.go:172] (0xc00093e1e0) (1) Data frame handling\nI0827 01:20:41.661358     840 log.go:172] (0xc00093e1e0) (1) Data frame sent\nI0827 01:20:41.661370     840 log.go:172] (0xc000946c60) (0xc00093e1e0) Stream removed, broadcasting: 1\nI0827 01:20:41.661413     840 log.go:172] (0xc000946c60) Go away received\nI0827 01:20:41.661659     840 log.go:172] (0xc000946c60) (0xc00093e1e0) Stream removed, broadcasting: 1\nI0827 01:20:41.661678     840 log.go:172] (0xc000946c60) (0xc0004f6820) Stream removed, broadcasting: 3\nI0827 01:20:41.661686     840 log.go:172] (0xc000946c60) (0xc0006f0a00) Stream removed, broadcasting: 5\n"
Aug 27 01:20:41.665: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:20:41.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-395" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.018 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":66,"skipped":1123,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:20:41.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-12d78c4f-0b47-4952-bab0-92cebd4d66b6
STEP: Creating a pod to test consume secrets
Aug 27 01:20:42.309: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-97cb0c27-d523-4514-bd91-c59d8bfa5461" in namespace "projected-4341" to be "success or failure"
Aug 27 01:20:42.312: INFO: Pod "pod-projected-secrets-97cb0c27-d523-4514-bd91-c59d8bfa5461": Phase="Pending", Reason="", readiness=false. Elapsed: 3.127402ms
Aug 27 01:20:44.379: INFO: Pod "pod-projected-secrets-97cb0c27-d523-4514-bd91-c59d8bfa5461": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069964103s
Aug 27 01:20:46.576: INFO: Pod "pod-projected-secrets-97cb0c27-d523-4514-bd91-c59d8bfa5461": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266866223s
Aug 27 01:20:48.738: INFO: Pod "pod-projected-secrets-97cb0c27-d523-4514-bd91-c59d8bfa5461": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.429023474s
STEP: Saw pod success
Aug 27 01:20:48.738: INFO: Pod "pod-projected-secrets-97cb0c27-d523-4514-bd91-c59d8bfa5461" satisfied condition "success or failure"
Aug 27 01:20:48.797: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-97cb0c27-d523-4514-bd91-c59d8bfa5461 container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 01:20:49.426: INFO: Waiting for pod pod-projected-secrets-97cb0c27-d523-4514-bd91-c59d8bfa5461 to disappear
Aug 27 01:20:49.671: INFO: Pod pod-projected-secrets-97cb0c27-d523-4514-bd91-c59d8bfa5461 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:20:49.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4341" for this suite.

• [SLOW TEST:8.093 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1156,"failed":0}
SSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:20:49.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:20:51.583: INFO: Waiting up to 5m0s for pod "busybox-user-65534-62795d16-5bab-4ba6-a550-9776042b95a4" in namespace "security-context-test-2336" to be "success or failure"
Aug 27 01:20:51.949: INFO: Pod "busybox-user-65534-62795d16-5bab-4ba6-a550-9776042b95a4": Phase="Pending", Reason="", readiness=false. Elapsed: 365.267302ms
Aug 27 01:20:54.087: INFO: Pod "busybox-user-65534-62795d16-5bab-4ba6-a550-9776042b95a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.503983202s
Aug 27 01:20:56.604: INFO: Pod "busybox-user-65534-62795d16-5bab-4ba6-a550-9776042b95a4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.020842418s
Aug 27 01:20:58.611: INFO: Pod "busybox-user-65534-62795d16-5bab-4ba6-a550-9776042b95a4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.027731073s
Aug 27 01:21:01.043: INFO: Pod "busybox-user-65534-62795d16-5bab-4ba6-a550-9776042b95a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.459921698s
Aug 27 01:21:01.043: INFO: Pod "busybox-user-65534-62795d16-5bab-4ba6-a550-9776042b95a4" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:21:01.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2336" for this suite.

• [SLOW TEST:11.531 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1159,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:21:01.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 27 01:21:02.153: INFO: Waiting up to 5m0s for pod "pod-c949e7db-d5d4-41d4-b078-4acfd6975fbc" in namespace "emptydir-9407" to be "success or failure"
Aug 27 01:21:02.164: INFO: Pod "pod-c949e7db-d5d4-41d4-b078-4acfd6975fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.34939ms
Aug 27 01:21:04.540: INFO: Pod "pod-c949e7db-d5d4-41d4-b078-4acfd6975fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387220933s
Aug 27 01:21:06.750: INFO: Pod "pod-c949e7db-d5d4-41d4-b078-4acfd6975fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.596954375s
Aug 27 01:21:08.845: INFO: Pod "pod-c949e7db-d5d4-41d4-b078-4acfd6975fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.692313821s
Aug 27 01:21:10.876: INFO: Pod "pod-c949e7db-d5d4-41d4-b078-4acfd6975fbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.723533083s
STEP: Saw pod success
Aug 27 01:21:10.876: INFO: Pod "pod-c949e7db-d5d4-41d4-b078-4acfd6975fbc" satisfied condition "success or failure"
Aug 27 01:21:11.024: INFO: Trying to get logs from node jerma-worker pod pod-c949e7db-d5d4-41d4-b078-4acfd6975fbc container test-container: 
STEP: delete the pod
Aug 27 01:21:11.890: INFO: Waiting for pod pod-c949e7db-d5d4-41d4-b078-4acfd6975fbc to disappear
Aug 27 01:21:12.085: INFO: Pod pod-c949e7db-d5d4-41d4-b078-4acfd6975fbc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:21:12.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9407" for this suite.

• [SLOW TEST:10.799 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1161,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:21:12.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:21:13.716: INFO: Creating deployment "webserver-deployment"
Aug 27 01:21:13.743: INFO: Waiting for observed generation 1
Aug 27 01:21:16.355: INFO: Waiting for all required pods to come up
Aug 27 01:21:16.978: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 27 01:21:34.624: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 27 01:21:34.629: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 27 01:21:34.634: INFO: Updating deployment webserver-deployment
Aug 27 01:21:34.634: INFO: Waiting for observed generation 2
Aug 27 01:21:36.774: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 27 01:21:36.782: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 27 01:21:37.066: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 27 01:21:37.466: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 27 01:21:37.466: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 27 01:21:37.468: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 27 01:21:37.472: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 27 01:21:37.472: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 27 01:21:37.478: INFO: Updating deployment webserver-deployment
Aug 27 01:21:37.478: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 27 01:21:37.692: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 27 01:21:38.157: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
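
The .spec.replicas values of 20 and 13 follow from proportional scaling arithmetic. Mid-rollout, with MaxSurge:3 and MaxUnavailable:2 (visible in the Deployment dump below), the old ReplicaSet sits at 8 and the new one at 5, for a capacity of 10+3=13. Scaling the deployment from 10 to 30 raises the capacity to 30+3=33, and each ReplicaSet keeps roughly its share of it: round(8*33/13)=20 for the old set, with the new set absorbing the remaining 13. A rough sketch of that allocation (the real controller works through annotations and integer arithmetic, so this is a simplification):

package main

import (
	"fmt"
	"math"
)

// proportionalScale sketches the arithmetic used when a deployment is
// resized mid-rollout: each ReplicaSet keeps its share of the total
// capacity (replicas + maxSurge), rounded, with the last ReplicaSet
// absorbing the remainder so the sizes add up exactly.
func proportionalScale(rsSizes []int, oldCapacity, newCapacity int) []int {
	out := make([]int, len(rsSizes))
	remaining := newCapacity
	for i, n := range rsSizes {
		if i == len(rsSizes)-1 {
			out[i] = remaining // last ReplicaSet takes what is left
			break
		}
		scaled := int(math.Round(float64(n) * float64(newCapacity) / float64(oldCapacity)))
		out[i] = scaled
		remaining -= scaled
	}
	return out
}

func main() {
	// Mid-rollout state from the log: old RS at 8, new RS at 5, capacity
	// 10+3=13; scaling 10->30 raises capacity to 30+3=33.
	fmt.Println(proportionalScale([]int{8, 5}, 13, 33)) // [20 13], as seen above
}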
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 27 01:21:39.445: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-7087 /apis/apps/v1/namespaces/deployment-7087/deployments/webserver-deployment 1f01d848-cea1-4e92-bcd7-928175064f17 4080489 3 2020-08-27 01:21:13 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00314c398  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-08-27 01:21:35 +0000 UTC,LastTransitionTime:2020-08-27 01:21:13 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-27 01:21:37 +0000 UTC,LastTransitionTime:2020-08-27 01:21:37 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug 27 01:21:39.619: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-7087 /apis/apps/v1/namespaces/deployment-7087/replicasets/webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 4080537 3 2020-08-27 01:21:34 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 1f01d848-cea1-4e92-bcd7-928175064f17 0xc002e04e57 0xc002e04e58}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e04ec8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 27 01:21:39.619: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 27 01:21:39.620: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-7087 /apis/apps/v1/namespaces/deployment-7087/replicasets/webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 4080540 3 2020-08-27 01:21:13 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 1f01d848-cea1-4e92-bcd7-928175064f17 0xc002e04d97 0xc002e04d98}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e04df8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 27 01:21:39.866: INFO: Pod "webserver-deployment-595b5b9587-2fsft" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2fsft webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-2fsft 7457c11f-8f27-49a0-abcd-bd32dfe33dc2 4080380 0 2020-08-27 01:21:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314c8a7 0xc00314c8a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.119,StartTime:2020-08-27 01:21:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:21:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5be09941b593acc8ab548adbd8cb96df5b3e5bc998e3c483f54a7c0a614cc079,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.119,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.866: INFO: Pod "webserver-deployment-595b5b9587-49kxk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-49kxk webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-49kxk a0575b86-0612-4196-bcb4-52f998ada297 4080393 0 2020-08-27 01:21:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314ca27 0xc00314ca28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.138,StartTime:2020-08-27 01:21:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:21:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9ef62b29b605e2c432e28a0a5e802371ac847de95f5980deab11501babb99d75,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.866: INFO: Pod "webserver-deployment-595b5b9587-5j5cv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5j5cv webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-5j5cv 0dece50c-3931-4866-a9d2-ff8db668d1af 4080507 0 2020-08-27 01:21:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314cba7 0xc00314cba8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.866: INFO: Pod "webserver-deployment-595b5b9587-6ntgb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6ntgb webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-6ntgb 773b1de3-cfe3-4469-99d8-5d312aeca516 4080386 0 2020-08-27 01:21:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314cce7 0xc00314cce8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.137,StartTime:2020-08-27 01:21:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:21:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5a28155883031074bdc4fc164c68ff85b06caf6bba1d4c6b1d2437952a53d6d7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.137,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.866: INFO: Pod "webserver-deployment-595b5b9587-79t8k" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-79t8k webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-79t8k 084be2da-38d0-4f14-b09c-8e5167d6ffc0 4080506 0 2020-08-27 01:21:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314ce67 0xc00314ce68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.866: INFO: Pod "webserver-deployment-595b5b9587-9zxl8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9zxl8 webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-9zxl8 bb8f0e6b-447f-487b-89d7-f07399c91a3c 4080539 0 2020-08-27 01:21:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314cf87 0xc00314cf88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-27 01:21:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.867: INFO: Pod "webserver-deployment-595b5b9587-br7tb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-br7tb webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-br7tb 13ee9287-92cf-40e7-8bed-89081e8cee9b 4080530 0 2020-08-27 01:21:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314d0e7 0xc00314d0e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.867: INFO: Pod "webserver-deployment-595b5b9587-bwjmg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bwjmg webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-bwjmg 36bdd145-d9e4-473a-87cc-f404d73c860c 4080508 0 2020-08-27 01:21:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314d207 0xc00314d208}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.867: INFO: Pod "webserver-deployment-595b5b9587-ctt4b" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ctt4b webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-ctt4b e09234bf-a7f5-49a2-9b91-f0a9fc91c958 4080398 0 2020-08-27 01:21:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314d327 0xc00314d328}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.136,StartTime:2020-08-27 01:21:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:21:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e74146b6450d83b2af35c45e276df3ec1621e1d29a93d0c40b400fb7020e829a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.867: INFO: Pod "webserver-deployment-595b5b9587-dk6z7" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dk6z7 webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-dk6z7 92eb1b2c-5a13-4118-9cfa-fb7d8d60eb07 4080358 0 2020-08-27 01:21:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314d4a7 0xc00314d4a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.135,StartTime:2020-08-27 01:21:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:21:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fa01da25a74f43864c9e5c18197d24c12478b9de07350e1bf871a252132e7446,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.135,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.867: INFO: Pod "webserver-deployment-595b5b9587-g9bs8" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-g9bs8 webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-g9bs8 362845fc-f87c-48ba-be36-79ce8ec4a875 4080402 0 2020-08-27 01:21:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314d627 0xc00314d628}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.139,StartTime:2020-08-27 01:21:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:21:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7610f69445518d6cad9722e40f177a3179b247e2f9e4b65c8cfc5308a5cdc748,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.867: INFO: Pod "webserver-deployment-595b5b9587-h6x54" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-h6x54 webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-h6x54 b68ab4e8-82a3-49f7-899f-8873570e21b6 4080504 0 2020-08-27 01:21:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314d7c7 0xc00314d7c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.868: INFO: Pod "webserver-deployment-595b5b9587-lmpfs" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lmpfs webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-lmpfs 1541dd9e-c7d1-4056-ae09-4a5123ba48d5 4080373 0 2020-08-27 01:21:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314d8f7 0xc00314d8f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.117,StartTime:2020-08-27 01:21:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:21:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ed6e6cba557887df4ae837d3adfd89a9c913cdcb950dbd52ba70f853b0822600,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.117,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.868: INFO: Pod "webserver-deployment-595b5b9587-n6g5d" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-n6g5d webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-n6g5d c3d08e3e-50f1-48a1-b799-f6c151974584 4080409 0 2020-08-27 01:21:14 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314da77 0xc00314da78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.118,StartTime:2020-08-27 01:21:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:21:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6ac9875f45696cb8d46d2a85d4cc95e42f074ce077b605e3ed664cc368bae107,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.118,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.868: INFO: Pod "webserver-deployment-595b5b9587-ncpx5" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ncpx5 webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-ncpx5 23b77686-efa2-43d3-a8fe-90a750ff0183 4080525 0 2020-08-27 01:21:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314dbf7 0xc00314dbf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.868: INFO: Pod "webserver-deployment-595b5b9587-nq9h5" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nq9h5 webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-nq9h5 8fadf47e-e2d5-4cab-88fe-dbb476bb34f8 4080559 0 2020-08-27 01:21:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314dd27 0xc00314dd28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-27 01:21:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.868: INFO: Pod "webserver-deployment-595b5b9587-p75b2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-p75b2 webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-p75b2 38a89c47-e923-4d69-bfa6-4bb72cf6866b 4080560 0 2020-08-27 01:21:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc00314de87 0xc00314de88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-27 01:21:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.868: INFO: Pod "webserver-deployment-595b5b9587-rx8tp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rx8tp webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-rx8tp 8c929a71-9f05-4fcc-815b-3610402e20d6 4080523 0 2020-08-27 01:21:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc002eb4037 0xc002eb4038}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.868: INFO: Pod "webserver-deployment-595b5b9587-wcmjd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wcmjd webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-wcmjd d635b1c4-bc76-4b26-a838-e8ec9068c196 4080531 0 2020-08-27 01:21:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc002eb4167 0xc002eb4168}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.869: INFO: Pod "webserver-deployment-595b5b9587-wlzvd" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wlzvd webserver-deployment-595b5b9587- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-595b5b9587-wlzvd fa50423a-72f6-4725-8df4-63dd20577fde 4080526 0 2020-08-27 01:21:38 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3f726bf5-c8a5-4c3a-bd21-3143d251dce8 0xc002eb4407 0xc002eb4408}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
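The dumps above mark a pod "available" when it is Running with its Ready condition True, while the new ReplicaSet's pods are still Pending (ContainerCreating). Below is a minimal client-go sketch of that readiness check, assuming a reachable cluster, the deployment-7087 namespace, and the name=httpd label seen in this run; the isPodReady helper is illustrative, and the framework's real availability check additionally honors the Deployment's minReadySeconds.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// matching the Ready:True conditions printed in the dumps above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load the kubeconfig used for the run (path is an assumption here).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same namespace and pod label as the dumps above.
	// List signature with context is client-go v0.18+.
	pods, err := client.CoreV1().Pods("deployment-7087").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		pod := &pods.Items[i]
		// Simplified availability: Running phase plus Ready condition.
		available := pod.Status.Phase == corev1.PodRunning && isPodReady(pod)
		fmt.Printf("Pod %q available: %v\n", pod.Name, available)
	}
}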
Aug 27 01:21:39.869: INFO: Pod "webserver-deployment-c7997dcc8-2chst" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2chst webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-2chst 4a6f9294-27b7-4d69-91e3-0cdf2d6a05cf 4080534 0 2020-08-27 01:21:38 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb45d7 0xc002eb45d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.869: INFO: Pod "webserver-deployment-c7997dcc8-4dpwt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4dpwt webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-4dpwt ba625f0d-49f0-440f-95e7-8f419afb4700 4080464 0 2020-08-27 01:21:34 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb47c7 0xc002eb47c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-27 01:21:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.869: INFO: Pod "webserver-deployment-c7997dcc8-67dxj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-67dxj webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-67dxj 1d4665d4-8e29-452a-a354-c2d7e920654d 4080512 0 2020-08-27 01:21:37 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb4ac7 0xc002eb4ac8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.869: INFO: Pod "webserver-deployment-c7997dcc8-7nrjd" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7nrjd webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-7nrjd ecb08431-c769-439c-9ea3-8f447b52c8dd 4080553 0 2020-08-27 01:21:37 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb4bf7 0xc002eb4bf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-27 01:21:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.869: INFO: Pod "webserver-deployment-c7997dcc8-bz5gz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bz5gz webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-bz5gz 4e888449-84a4-4857-a641-c78a6618219f 4080470 0 2020-08-27 01:21:34 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb4e67 0xc002eb4e68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-27 01:21:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.869: INFO: Pod "webserver-deployment-c7997dcc8-kwllp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kwllp webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-kwllp 51270f7c-01eb-42b5-8cf6-353e0c1726b0 4080533 0 2020-08-27 01:21:38 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb4ff7 0xc002eb4ff8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.870: INFO: Pod "webserver-deployment-c7997dcc8-m9rjm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m9rjm webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-m9rjm 4ebe2f63-77b3-449c-8d2a-5fc48a64be7d 4080442 0 2020-08-27 01:21:34 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb5137 0xc002eb5138}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-08-27 01:21:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.870: INFO: Pod "webserver-deployment-c7997dcc8-mkvhf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mkvhf webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-mkvhf a0dc18d9-93c3-4611-8109-3f7b88faf4e0 4080441 0 2020-08-27 01:21:34 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb52b7 0xc002eb52b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-27 01:21:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.870: INFO: Pod "webserver-deployment-c7997dcc8-qrnl4" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qrnl4 webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-qrnl4 3f76448f-0a31-45cd-b3a2-273b67ef8297 4080535 0 2020-08-27 01:21:38 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb5487 0xc002eb5488}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.870: INFO: Pod "webserver-deployment-c7997dcc8-skfbs" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-skfbs webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-skfbs 745a52b7-3612-4d83-a588-a72a80dc7e7c 4080538 0 2020-08-27 01:21:38 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb5777 0xc002eb5778}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.870: INFO: Pod "webserver-deployment-c7997dcc8-v5j5k" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v5j5k webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-v5j5k 3c72cb29-5b77-4e7a-a63b-feb9beb2dcc8 4080458 0 2020-08-27 01:21:34 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb58a7 0xc002eb58a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-27 01:21:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.870: INFO: Pod "webserver-deployment-c7997dcc8-whdmk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-whdmk webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-whdmk 58890abc-b9a5-4ce4-8b93-465b5bb71047 4080532 0 2020-08-27 01:21:38 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb5c67 0xc002eb5c68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:21:39.871: INFO: Pod "webserver-deployment-c7997dcc8-zlznb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zlznb webserver-deployment-c7997dcc8- deployment-7087 /api/v1/namespaces/deployment-7087/pods/webserver-deployment-c7997dcc8-zlznb cebdec3f-55af-4d21-9d8a-93c3b3dd9b7c 4080516 0 2020-08-27 01:21:37 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131366af-eb43-48b7-9667-c7bc75ae0474 0xc002eb5d97 0xc002eb5d98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xdklt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xdklt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xdklt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:21:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:21:39.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7087" for this suite.

• [SLOW TEST:28.765 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":70,"skipped":1176,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:21:40.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 27 01:21:51.307: INFO: 10 pods remaining
Aug 27 01:21:51.308: INFO: 7 pods have nil DeletionTimestamp
Aug 27 01:21:51.308: INFO: 
Aug 27 01:21:52.769: INFO: 0 pods remaining
Aug 27 01:21:52.769: INFO: 0 pods have nil DeletionTimestamp
Aug 27 01:21:52.769: INFO: 
Aug 27 01:21:54.219: INFO: 0 pods remaining
Aug 27 01:21:54.219: INFO: 0 pods have nil DeletionTimestamp
Aug 27 01:21:54.219: INFO: 
STEP: Gathering metrics
W0827 01:21:57.519774       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 01:21:57.519: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:21:57.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-473" for this suite.

• [SLOW TEST:17.459 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":71,"skipped":1185,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:21:58.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:22:19.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5377" for this suite.

• [SLOW TEST:21.160 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":72,"skipped":1195,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:22:19.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:22:19.630: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b37eed47-e6c9-4992-8b25-54572b729449" in namespace "projected-8911" to be "success or failure"
Aug 27 01:22:19.801: INFO: Pod "downwardapi-volume-b37eed47-e6c9-4992-8b25-54572b729449": Phase="Pending", Reason="", readiness=false. Elapsed: 171.531343ms
Aug 27 01:22:21.804: INFO: Pod "downwardapi-volume-b37eed47-e6c9-4992-8b25-54572b729449": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174851443s
Aug 27 01:22:23.807: INFO: Pod "downwardapi-volume-b37eed47-e6c9-4992-8b25-54572b729449": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.17774403s
STEP: Saw pod success
Aug 27 01:22:23.807: INFO: Pod "downwardapi-volume-b37eed47-e6c9-4992-8b25-54572b729449" satisfied condition "success or failure"
Aug 27 01:22:23.809: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b37eed47-e6c9-4992-8b25-54572b729449 container client-container: 
STEP: delete the pod
Aug 27 01:22:23.874: INFO: Waiting for pod downwardapi-volume-b37eed47-e6c9-4992-8b25-54572b729449 to disappear
Aug 27 01:22:23.884: INFO: Pod downwardapi-volume-b37eed47-e6c9-4992-8b25-54572b729449 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:22:23.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8911" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1200,"failed":0}

------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:22:23.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 27 01:22:24.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5649'
Aug 27 01:22:24.167: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 01:22:24.167: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Aug 27 01:22:26.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5649'
Aug 27 01:22:26.487: INFO: stderr: ""
Aug 27 01:22:26.487: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:22:26.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5649" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":74,"skipped":1200,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:22:26.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:22:26.630: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f70c9c29-2d21-48aa-b178-27cc6dde1e38" in namespace "security-context-test-6052" to be "success or failure"
Aug 27 01:22:27.337: INFO: Pod "busybox-privileged-false-f70c9c29-2d21-48aa-b178-27cc6dde1e38": Phase="Pending", Reason="", readiness=false. Elapsed: 707.7237ms
Aug 27 01:22:29.462: INFO: Pod "busybox-privileged-false-f70c9c29-2d21-48aa-b178-27cc6dde1e38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.832464106s
Aug 27 01:22:31.466: INFO: Pod "busybox-privileged-false-f70c9c29-2d21-48aa-b178-27cc6dde1e38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.836784723s
Aug 27 01:22:33.470: INFO: Pod "busybox-privileged-false-f70c9c29-2d21-48aa-b178-27cc6dde1e38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.840834641s
Aug 27 01:22:33.470: INFO: Pod "busybox-privileged-false-f70c9c29-2d21-48aa-b178-27cc6dde1e38" satisfied condition "success or failure"
Aug 27 01:22:33.490: INFO: Got logs for pod "busybox-privileged-false-f70c9c29-2d21-48aa-b178-27cc6dde1e38": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:22:33.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6052" for this suite.

• [SLOW TEST:6.980 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1212,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:22:33.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Aug 27 01:22:33.886: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:22:33.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2483" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":76,"skipped":1215,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:22:33.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:22:40.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-190" for this suite.

• [SLOW TEST:7.519 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1232,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:22:41.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-8731ec38-c975-4a35-9a40-aa5aefc7b80d
STEP: Creating a pod to test consume secrets
Aug 27 01:22:43.098: INFO: Waiting up to 5m0s for pod "pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746" in namespace "secrets-3187" to be "success or failure"
Aug 27 01:22:43.319: INFO: Pod "pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746": Phase="Pending", Reason="", readiness=false. Elapsed: 221.595216ms
Aug 27 01:22:45.583: INFO: Pod "pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485542863s
Aug 27 01:22:47.767: INFO: Pod "pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746": Phase="Pending", Reason="", readiness=false. Elapsed: 4.669727911s
Aug 27 01:22:50.223: INFO: Pod "pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746": Phase="Pending", Reason="", readiness=false. Elapsed: 7.125117528s
Aug 27 01:22:52.413: INFO: Pod "pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746": Phase="Pending", Reason="", readiness=false. Elapsed: 9.315233597s
Aug 27 01:22:54.599: INFO: Pod "pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746": Phase="Pending", Reason="", readiness=false. Elapsed: 11.501388472s
Aug 27 01:22:56.690: INFO: Pod "pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746": Phase="Running", Reason="", readiness=true. Elapsed: 13.592372858s
Aug 27 01:22:58.846: INFO: Pod "pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.748059771s
STEP: Saw pod success
Aug 27 01:22:58.846: INFO: Pod "pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746" satisfied condition "success or failure"
Aug 27 01:22:58.848: INFO: Trying to get logs from node jerma-worker pod pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746 container secret-env-test: 
STEP: delete the pod
Aug 27 01:22:58.920: INFO: Waiting for pod pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746 to disappear
Aug 27 01:22:59.085: INFO: Pod pod-secrets-b74c86fe-97a4-41c0-a305-613d846cb746 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:22:59.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3187" for this suite.

• [SLOW TEST:17.591 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1233,"failed":0}
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:22:59.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5507
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5507
STEP: creating replication controller externalsvc in namespace services-5507
I0827 01:23:00.813448       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5507, replica count: 2
I0827 01:23:03.863885       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 01:23:06.864175       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 01:23:09.864416       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 01:23:12.864675       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 27 01:23:13.816: INFO: Creating new exec pod
Aug 27 01:23:22.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5507 execpodwjxbf -- /bin/sh -x -c nslookup nodeport-service'
Aug 27 01:23:22.706: INFO: stderr: "I0827 01:23:22.624567     917 log.go:172] (0xc0003c2c60) (0xc0006e1ae0) Create stream\nI0827 01:23:22.624642     917 log.go:172] (0xc0003c2c60) (0xc0006e1ae0) Stream added, broadcasting: 1\nI0827 01:23:22.627228     917 log.go:172] (0xc0003c2c60) Reply frame received for 1\nI0827 01:23:22.627277     917 log.go:172] (0xc0003c2c60) (0xc00096c000) Create stream\nI0827 01:23:22.627288     917 log.go:172] (0xc0003c2c60) (0xc00096c000) Stream added, broadcasting: 3\nI0827 01:23:22.628231     917 log.go:172] (0xc0003c2c60) Reply frame received for 3\nI0827 01:23:22.628263     917 log.go:172] (0xc0003c2c60) (0xc0006e1cc0) Create stream\nI0827 01:23:22.628274     917 log.go:172] (0xc0003c2c60) (0xc0006e1cc0) Stream added, broadcasting: 5\nI0827 01:23:22.629486     917 log.go:172] (0xc0003c2c60) Reply frame received for 5\nI0827 01:23:22.687256     917 log.go:172] (0xc0003c2c60) Data frame received for 5\nI0827 01:23:22.687285     917 log.go:172] (0xc0006e1cc0) (5) Data frame handling\nI0827 01:23:22.687300     917 log.go:172] (0xc0006e1cc0) (5) Data frame sent\n+ nslookup nodeport-service\nI0827 01:23:22.694444     917 log.go:172] (0xc0003c2c60) Data frame received for 3\nI0827 01:23:22.694459     917 log.go:172] (0xc00096c000) (3) Data frame handling\nI0827 01:23:22.694470     917 log.go:172] (0xc00096c000) (3) Data frame sent\nI0827 01:23:22.695291     917 log.go:172] (0xc0003c2c60) Data frame received for 3\nI0827 01:23:22.695301     917 log.go:172] (0xc00096c000) (3) Data frame handling\nI0827 01:23:22.695311     917 log.go:172] (0xc00096c000) (3) Data frame sent\nI0827 01:23:22.695670     917 log.go:172] (0xc0003c2c60) Data frame received for 5\nI0827 01:23:22.695691     917 log.go:172] (0xc0006e1cc0) (5) Data frame handling\nI0827 01:23:22.695892     917 log.go:172] (0xc0003c2c60) Data frame received for 3\nI0827 01:23:22.695908     917 log.go:172] (0xc00096c000) (3) Data frame handling\nI0827 01:23:22.697495     917 log.go:172] (0xc0003c2c60) Data frame received for 1\nI0827 01:23:22.697517     917 log.go:172] (0xc0006e1ae0) (1) Data frame handling\nI0827 01:23:22.697532     917 log.go:172] (0xc0006e1ae0) (1) Data frame sent\nI0827 01:23:22.697550     917 log.go:172] (0xc0003c2c60) (0xc0006e1ae0) Stream removed, broadcasting: 1\nI0827 01:23:22.697568     917 log.go:172] (0xc0003c2c60) Go away received\nI0827 01:23:22.697911     917 log.go:172] (0xc0003c2c60) (0xc0006e1ae0) Stream removed, broadcasting: 1\nI0827 01:23:22.697926     917 log.go:172] (0xc0003c2c60) (0xc00096c000) Stream removed, broadcasting: 3\nI0827 01:23:22.697934     917 log.go:172] (0xc0003c2c60) (0xc0006e1cc0) Stream removed, broadcasting: 5\n"
Aug 27 01:23:22.706: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5507.svc.cluster.local\tcanonical name = externalsvc.services-5507.svc.cluster.local.\nName:\texternalsvc.services-5507.svc.cluster.local\nAddress: 10.108.253.6\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5507, will wait for the garbage collector to delete the pods
Aug 27 01:23:23.243: INFO: Deleting ReplicationController externalsvc took: 15.939281ms
Aug 27 01:23:23.843: INFO: Terminating ReplicationController externalsvc pods took: 600.259657ms
Aug 27 01:23:41.842: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:23:41.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5507" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:42.798 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":79,"skipped":1233,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:23:41.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 27 01:23:46.127: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:23:46.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4278" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1252,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:23:46.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 27 01:23:52.913: INFO: Successfully updated pod "adopt-release-58fpv"
STEP: Checking that the Job readopts the Pod
Aug 27 01:23:52.913: INFO: Waiting up to 15m0s for pod "adopt-release-58fpv" in namespace "job-5033" to be "adopted"
Aug 27 01:23:52.917: INFO: Pod "adopt-release-58fpv": Phase="Running", Reason="", readiness=true. Elapsed: 4.634917ms
Aug 27 01:23:54.936: INFO: Pod "adopt-release-58fpv": Phase="Running", Reason="", readiness=true. Elapsed: 2.023727464s
Aug 27 01:23:54.937: INFO: Pod "adopt-release-58fpv" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 27 01:23:56.128: INFO: Successfully updated pod "adopt-release-58fpv"
STEP: Checking that the Job releases the Pod
Aug 27 01:23:56.128: INFO: Waiting up to 15m0s for pod "adopt-release-58fpv" in namespace "job-5033" to be "released"
Aug 27 01:23:56.131: INFO: Pod "adopt-release-58fpv": Phase="Running", Reason="", readiness=true. Elapsed: 2.909665ms
Aug 27 01:23:58.519: INFO: Pod "adopt-release-58fpv": Phase="Running", Reason="", readiness=true. Elapsed: 2.391021243s
Aug 27 01:23:58.519: INFO: Pod "adopt-release-58fpv" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:23:58.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5033" for this suite.

• [SLOW TEST:12.308 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":81,"skipped":1268,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:23:58.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9295
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 27 01:24:01.212: INFO: Found 0 stateful pods, waiting for 3
Aug 27 01:24:11.216: INFO: Found 2 stateful pods, waiting for 3
Aug 27 01:24:21.216: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 01:24:21.216: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 01:24:21.216: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 27 01:24:31.242: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 01:24:31.242: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 01:24:31.242: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 01:24:31.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9295 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 27 01:24:31.494: INFO: stderr: "I0827 01:24:31.382819     938 log.go:172] (0xc0009578c0) (0xc0009daa00) Create stream\nI0827 01:24:31.382881     938 log.go:172] (0xc0009578c0) (0xc0009daa00) Stream added, broadcasting: 1\nI0827 01:24:31.384961     938 log.go:172] (0xc0009578c0) Reply frame received for 1\nI0827 01:24:31.385003     938 log.go:172] (0xc0009578c0) (0xc00063fcc0) Create stream\nI0827 01:24:31.385016     938 log.go:172] (0xc0009578c0) (0xc00063fcc0) Stream added, broadcasting: 3\nI0827 01:24:31.385895     938 log.go:172] (0xc0009578c0) Reply frame received for 3\nI0827 01:24:31.385926     938 log.go:172] (0xc0009578c0) (0xc0008fe140) Create stream\nI0827 01:24:31.385938     938 log.go:172] (0xc0009578c0) (0xc0008fe140) Stream added, broadcasting: 5\nI0827 01:24:31.386672     938 log.go:172] (0xc0009578c0) Reply frame received for 5\nI0827 01:24:31.443231     938 log.go:172] (0xc0009578c0) Data frame received for 5\nI0827 01:24:31.443259     938 log.go:172] (0xc0008fe140) (5) Data frame handling\nI0827 01:24:31.443282     938 log.go:172] (0xc0008fe140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0827 01:24:31.485427     938 log.go:172] (0xc0009578c0) Data frame received for 3\nI0827 01:24:31.485468     938 log.go:172] (0xc00063fcc0) (3) Data frame handling\nI0827 01:24:31.485494     938 log.go:172] (0xc00063fcc0) (3) Data frame sent\nI0827 01:24:31.486112     938 log.go:172] (0xc0009578c0) Data frame received for 5\nI0827 01:24:31.486174     938 log.go:172] (0xc0008fe140) (5) Data frame handling\nI0827 01:24:31.486212     938 log.go:172] (0xc0009578c0) Data frame received for 3\nI0827 01:24:31.486228     938 log.go:172] (0xc00063fcc0) (3) Data frame handling\nI0827 01:24:31.487701     938 log.go:172] (0xc0009578c0) Data frame received for 1\nI0827 01:24:31.487735     938 log.go:172] (0xc0009daa00) (1) Data frame handling\nI0827 01:24:31.487756     938 log.go:172] (0xc0009daa00) (1) Data frame sent\nI0827 01:24:31.487782     938 log.go:172] (0xc0009578c0) (0xc0009daa00) Stream removed, broadcasting: 1\nI0827 01:24:31.487985     938 log.go:172] (0xc0009578c0) Go away received\nI0827 01:24:31.488284     938 log.go:172] (0xc0009578c0) (0xc0009daa00) Stream removed, broadcasting: 1\nI0827 01:24:31.488307     938 log.go:172] (0xc0009578c0) (0xc00063fcc0) Stream removed, broadcasting: 3\nI0827 01:24:31.488318     938 log.go:172] (0xc0009578c0) (0xc0008fe140) Stream removed, broadcasting: 5\n"
Aug 27 01:24:31.495: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 27 01:24:31.495: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 27 01:24:41.575: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 27 01:24:51.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9295 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 01:24:51.952: INFO: stderr: "I0827 01:24:51.875648     953 log.go:172] (0xc0007c88f0) (0xc0009241e0) Create stream\nI0827 01:24:51.875697     953 log.go:172] (0xc0007c88f0) (0xc0009241e0) Stream added, broadcasting: 1\nI0827 01:24:51.878036     953 log.go:172] (0xc0007c88f0) Reply frame received for 1\nI0827 01:24:51.878071     953 log.go:172] (0xc0007c88f0) (0xc0002ef4a0) Create stream\nI0827 01:24:51.878087     953 log.go:172] (0xc0007c88f0) (0xc0002ef4a0) Stream added, broadcasting: 3\nI0827 01:24:51.878764     953 log.go:172] (0xc0007c88f0) Reply frame received for 3\nI0827 01:24:51.878800     953 log.go:172] (0xc0007c88f0) (0xc0006a7ae0) Create stream\nI0827 01:24:51.878810     953 log.go:172] (0xc0007c88f0) (0xc0006a7ae0) Stream added, broadcasting: 5\nI0827 01:24:51.879577     953 log.go:172] (0xc0007c88f0) Reply frame received for 5\nI0827 01:24:51.941128     953 log.go:172] (0xc0007c88f0) Data frame received for 3\nI0827 01:24:51.941161     953 log.go:172] (0xc0002ef4a0) (3) Data frame handling\nI0827 01:24:51.941180     953 log.go:172] (0xc0002ef4a0) (3) Data frame sent\nI0827 01:24:51.941262     953 log.go:172] (0xc0007c88f0) Data frame received for 5\nI0827 01:24:51.941276     953 log.go:172] (0xc0006a7ae0) (5) Data frame handling\nI0827 01:24:51.941293     953 log.go:172] (0xc0006a7ae0) (5) Data frame sent\nI0827 01:24:51.941305     953 log.go:172] (0xc0007c88f0) Data frame received for 5\nI0827 01:24:51.941312     953 log.go:172] (0xc0006a7ae0) (5) Data frame handling\nI0827 01:24:51.941324     953 log.go:172] (0xc0007c88f0) Data frame received for 3\nI0827 01:24:51.941340     953 log.go:172] (0xc0002ef4a0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0827 01:24:51.942667     953 log.go:172] (0xc0007c88f0) Data frame received for 1\nI0827 01:24:51.942679     953 log.go:172] (0xc0009241e0) (1) Data frame handling\nI0827 01:24:51.942689     953 log.go:172] (0xc0009241e0) (1) Data frame sent\nI0827 01:24:51.942699     953 log.go:172] (0xc0007c88f0) (0xc0009241e0) Stream removed, broadcasting: 1\nI0827 01:24:51.942709     953 log.go:172] (0xc0007c88f0) Go away received\nI0827 01:24:51.943059     953 log.go:172] (0xc0007c88f0) (0xc0009241e0) Stream removed, broadcasting: 1\nI0827 01:24:51.943078     953 log.go:172] (0xc0007c88f0) (0xc0002ef4a0) Stream removed, broadcasting: 3\nI0827 01:24:51.943088     953 log.go:172] (0xc0007c88f0) (0xc0006a7ae0) Stream removed, broadcasting: 5\n"
Aug 27 01:24:51.952: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 27 01:24:51.952: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 27 01:25:01.997: INFO: Waiting for StatefulSet statefulset-9295/ss2 to complete update
Aug 27 01:25:01.997: INFO: Waiting for Pod statefulset-9295/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 27 01:25:01.997: INFO: Waiting for Pod statefulset-9295/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 27 01:25:12.002: INFO: Waiting for StatefulSet statefulset-9295/ss2 to complete update
Aug 27 01:25:12.002: INFO: Waiting for Pod statefulset-9295/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 27 01:25:12.002: INFO: Waiting for Pod statefulset-9295/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 27 01:25:22.003: INFO: Waiting for StatefulSet statefulset-9295/ss2 to complete update
Aug 27 01:25:22.003: INFO: Waiting for Pod statefulset-9295/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 27 01:25:32.131: INFO: Waiting for StatefulSet statefulset-9295/ss2 to complete update
STEP: Rolling back to a previous revision
Aug 27 01:25:42.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9295 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 27 01:25:42.330: INFO: stderr: "I0827 01:25:42.133283     974 log.go:172] (0xc000bd7600) (0xc000bc6820) Create stream\nI0827 01:25:42.133346     974 log.go:172] (0xc000bd7600) (0xc000bc6820) Stream added, broadcasting: 1\nI0827 01:25:42.137711     974 log.go:172] (0xc000bd7600) Reply frame received for 1\nI0827 01:25:42.137760     974 log.go:172] (0xc000bd7600) (0xc00070da40) Create stream\nI0827 01:25:42.137775     974 log.go:172] (0xc000bd7600) (0xc00070da40) Stream added, broadcasting: 3\nI0827 01:25:42.138739     974 log.go:172] (0xc000bd7600) Reply frame received for 3\nI0827 01:25:42.138769     974 log.go:172] (0xc000bd7600) (0xc00070dae0) Create stream\nI0827 01:25:42.138779     974 log.go:172] (0xc000bd7600) (0xc00070dae0) Stream added, broadcasting: 5\nI0827 01:25:42.139732     974 log.go:172] (0xc000bd7600) Reply frame received for 5\nI0827 01:25:42.205932     974 log.go:172] (0xc000bd7600) Data frame received for 5\nI0827 01:25:42.205959     974 log.go:172] (0xc00070dae0) (5) Data frame handling\nI0827 01:25:42.205975     974 log.go:172] (0xc00070dae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0827 01:25:42.318909     974 log.go:172] (0xc000bd7600) Data frame received for 5\nI0827 01:25:42.318941     974 log.go:172] (0xc00070dae0) (5) Data frame handling\nI0827 01:25:42.318981     974 log.go:172] (0xc000bd7600) Data frame received for 3\nI0827 01:25:42.319021     974 log.go:172] (0xc00070da40) (3) Data frame handling\nI0827 01:25:42.319042     974 log.go:172] (0xc00070da40) (3) Data frame sent\nI0827 01:25:42.319062     974 log.go:172] (0xc000bd7600) Data frame received for 3\nI0827 01:25:42.319076     974 log.go:172] (0xc00070da40) (3) Data frame handling\nI0827 01:25:42.320556     974 log.go:172] (0xc000bd7600) Data frame received for 1\nI0827 01:25:42.320575     974 log.go:172] (0xc000bc6820) (1) Data frame handling\nI0827 01:25:42.320590     974 log.go:172] (0xc000bc6820) (1) Data frame sent\nI0827 01:25:42.320601     974 log.go:172] (0xc000bd7600) (0xc000bc6820) Stream removed, broadcasting: 1\nI0827 01:25:42.320611     974 log.go:172] (0xc000bd7600) Go away received\nI0827 01:25:42.320999     974 log.go:172] (0xc000bd7600) (0xc000bc6820) Stream removed, broadcasting: 1\nI0827 01:25:42.321013     974 log.go:172] (0xc000bd7600) (0xc00070da40) Stream removed, broadcasting: 3\nI0827 01:25:42.321018     974 log.go:172] (0xc000bd7600) (0xc00070dae0) Stream removed, broadcasting: 5\n"
Aug 27 01:25:42.330: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 27 01:25:42.330: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 27 01:25:52.417: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 27 01:26:02.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9295 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 01:26:02.735: INFO: stderr: "I0827 01:26:02.653643     993 log.go:172] (0xc00020ca50) (0xc0005dbe00) Create stream\nI0827 01:26:02.653706     993 log.go:172] (0xc00020ca50) (0xc0005dbe00) Stream added, broadcasting: 1\nI0827 01:26:02.656026     993 log.go:172] (0xc00020ca50) Reply frame received for 1\nI0827 01:26:02.656064     993 log.go:172] (0xc00020ca50) (0xc0008b8000) Create stream\nI0827 01:26:02.656082     993 log.go:172] (0xc00020ca50) (0xc0008b8000) Stream added, broadcasting: 3\nI0827 01:26:02.657001     993 log.go:172] (0xc00020ca50) Reply frame received for 3\nI0827 01:26:02.657032     993 log.go:172] (0xc00020ca50) (0xc0007b08c0) Create stream\nI0827 01:26:02.657050     993 log.go:172] (0xc00020ca50) (0xc0007b08c0) Stream added, broadcasting: 5\nI0827 01:26:02.657944     993 log.go:172] (0xc00020ca50) Reply frame received for 5\nI0827 01:26:02.725318     993 log.go:172] (0xc00020ca50) Data frame received for 3\nI0827 01:26:02.725431     993 log.go:172] (0xc0008b8000) (3) Data frame handling\nI0827 01:26:02.725451     993 log.go:172] (0xc0008b8000) (3) Data frame sent\nI0827 01:26:02.725482     993 log.go:172] (0xc00020ca50) Data frame received for 5\nI0827 01:26:02.725494     993 log.go:172] (0xc0007b08c0) (5) Data frame handling\nI0827 01:26:02.725506     993 log.go:172] (0xc0007b08c0) (5) Data frame sent\nI0827 01:26:02.725515     993 log.go:172] (0xc00020ca50) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0827 01:26:02.725520     993 log.go:172] (0xc0007b08c0) (5) Data frame handling\nI0827 01:26:02.725585     993 log.go:172] (0xc00020ca50) Data frame received for 3\nI0827 01:26:02.725623     993 log.go:172] (0xc0008b8000) (3) Data frame handling\nI0827 01:26:02.726904     993 log.go:172] (0xc00020ca50) Data frame received for 1\nI0827 01:26:02.726933     993 log.go:172] (0xc0005dbe00) (1) Data frame handling\nI0827 01:26:02.726954     993 log.go:172] (0xc0005dbe00) (1) Data frame sent\nI0827 01:26:02.726970     993 log.go:172] (0xc00020ca50) (0xc0005dbe00) Stream removed, broadcasting: 1\nI0827 01:26:02.726988     993 log.go:172] (0xc00020ca50) Go away received\nI0827 01:26:02.727311     993 log.go:172] (0xc00020ca50) (0xc0005dbe00) Stream removed, broadcasting: 1\nI0827 01:26:02.727326     993 log.go:172] (0xc00020ca50) (0xc0008b8000) Stream removed, broadcasting: 3\nI0827 01:26:02.727333     993 log.go:172] (0xc00020ca50) (0xc0007b08c0) Stream removed, broadcasting: 5\n"
Aug 27 01:26:02.735: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 27 01:26:02.735: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 27 01:26:12.752: INFO: Waiting for StatefulSet statefulset-9295/ss2 to complete update
Aug 27 01:26:12.752: INFO: Waiting for Pod statefulset-9295/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 27 01:26:12.752: INFO: Waiting for Pod statefulset-9295/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 27 01:26:22.822: INFO: Waiting for StatefulSet statefulset-9295/ss2 to complete update
Aug 27 01:26:22.822: INFO: Waiting for Pod statefulset-9295/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 27 01:26:32.757: INFO: Waiting for StatefulSet statefulset-9295/ss2 to complete update
Aug 27 01:26:32.757: INFO: Waiting for Pod statefulset-9295/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 27 01:26:43.389: INFO: Waiting for StatefulSet statefulset-9295/ss2 to complete update
Aug 27 01:26:52.759: INFO: Waiting for StatefulSet statefulset-9295/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 27 01:27:03.170: INFO: Deleting all statefulset in ns statefulset-9295
Aug 27 01:27:03.172: INFO: Scaling statefulset ss2 to 0
Aug 27 01:27:33.690: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 01:27:33.692: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:27:33.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9295" for this suite.

• [SLOW TEST:215.150 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":82,"skipped":1295,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:27:33.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 27 01:27:41.292: INFO: Successfully updated pod "labelsupdate4e4d2ec3-a291-4564-aa86-88958896fd7c"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:27:41.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9027" for this suite.

• [SLOW TEST:8.187 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1343,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:27:41.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-2e74b734-02b2-4d33-8c49-1d12149df77b
STEP: Creating a pod to test consume secrets
Aug 27 01:27:42.298: INFO: Waiting up to 5m0s for pod "pod-secrets-c8fde3ad-07b8-49f8-b1bf-a7af2250d098" in namespace "secrets-525" to be "success or failure"
Aug 27 01:27:42.320: INFO: Pod "pod-secrets-c8fde3ad-07b8-49f8-b1bf-a7af2250d098": Phase="Pending", Reason="", readiness=false. Elapsed: 22.255497ms
Aug 27 01:27:44.549: INFO: Pod "pod-secrets-c8fde3ad-07b8-49f8-b1bf-a7af2250d098": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25053841s
Aug 27 01:27:46.551: INFO: Pod "pod-secrets-c8fde3ad-07b8-49f8-b1bf-a7af2250d098": Phase="Pending", Reason="", readiness=false. Elapsed: 4.252876196s
Aug 27 01:27:48.554: INFO: Pod "pod-secrets-c8fde3ad-07b8-49f8-b1bf-a7af2250d098": Phase="Running", Reason="", readiness=true. Elapsed: 6.256172728s
Aug 27 01:27:50.558: INFO: Pod "pod-secrets-c8fde3ad-07b8-49f8-b1bf-a7af2250d098": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.260378117s
STEP: Saw pod success
Aug 27 01:27:50.558: INFO: Pod "pod-secrets-c8fde3ad-07b8-49f8-b1bf-a7af2250d098" satisfied condition "success or failure"
Aug 27 01:27:50.562: INFO: Trying to get logs from node jerma-worker pod pod-secrets-c8fde3ad-07b8-49f8-b1bf-a7af2250d098 container secret-volume-test: 
STEP: delete the pod
Aug 27 01:27:50.589: INFO: Waiting for pod pod-secrets-c8fde3ad-07b8-49f8-b1bf-a7af2250d098 to disappear
Aug 27 01:27:50.599: INFO: Pod pod-secrets-c8fde3ad-07b8-49f8-b1bf-a7af2250d098 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:27:50.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-525" for this suite.

• [SLOW TEST:8.689 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1370,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:27:50.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-ngpf
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 01:27:50.704: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ngpf" in namespace "subpath-4894" to be "success or failure"
Aug 27 01:27:50.742: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Pending", Reason="", readiness=false. Elapsed: 38.236099ms
Aug 27 01:27:52.757: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053690146s
Aug 27 01:27:54.761: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Running", Reason="", readiness=true. Elapsed: 4.057617691s
Aug 27 01:27:56.766: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Running", Reason="", readiness=true. Elapsed: 6.062037234s
Aug 27 01:27:58.770: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Running", Reason="", readiness=true. Elapsed: 8.066294549s
Aug 27 01:28:00.774: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Running", Reason="", readiness=true. Elapsed: 10.070609094s
Aug 27 01:28:02.778: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Running", Reason="", readiness=true. Elapsed: 12.074602064s
Aug 27 01:28:04.782: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Running", Reason="", readiness=true. Elapsed: 14.077875056s
Aug 27 01:28:06.786: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Running", Reason="", readiness=true. Elapsed: 16.082256685s
Aug 27 01:28:08.790: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Running", Reason="", readiness=true. Elapsed: 18.086415961s
Aug 27 01:28:10.794: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Running", Reason="", readiness=true. Elapsed: 20.090335303s
Aug 27 01:28:12.848: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Running", Reason="", readiness=true. Elapsed: 22.144232752s
Aug 27 01:28:14.852: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Running", Reason="", readiness=true. Elapsed: 24.147847655s
Aug 27 01:28:16.856: INFO: Pod "pod-subpath-test-downwardapi-ngpf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.152113512s
STEP: Saw pod success
Aug 27 01:28:16.856: INFO: Pod "pod-subpath-test-downwardapi-ngpf" satisfied condition "success or failure"
Aug 27 01:28:16.859: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-ngpf container test-container-subpath-downwardapi-ngpf: 
STEP: delete the pod
Aug 27 01:28:16.909: INFO: Waiting for pod pod-subpath-test-downwardapi-ngpf to disappear
Aug 27 01:28:16.919: INFO: Pod pod-subpath-test-downwardapi-ngpf no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-ngpf
Aug 27 01:28:16.919: INFO: Deleting pod "pod-subpath-test-downwardapi-ngpf" in namespace "subpath-4894"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:28:16.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4894" for this suite.

• [SLOW TEST:26.321 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":85,"skipped":1383,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:28:16.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Aug 27 01:28:16.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 27 01:28:17.072: INFO: stderr: ""
Aug 27 01:28:17.072: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37695/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:28:17.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7242" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":86,"skipped":1386,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:28:17.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:28:17.147: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 27 01:28:22.477: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 27 01:28:22.477: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 27 01:28:24.598: INFO: Creating deployment "test-rollover-deployment"
Aug 27 01:28:24.669: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 27 01:28:26.684: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 27 01:28:27.124: INFO: Ensure that both replica sets have 1 created replica
Aug 27 01:28:27.577: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 27 01:28:27.852: INFO: Updating deployment test-rollover-deployment
Aug 27 01:28:27.852: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 27 01:28:30.695: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 27 01:28:30.945: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 27 01:28:31.022: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 01:28:31.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088510, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088504, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:28:33.170: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 01:28:33.170: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088510, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088504, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:28:35.035: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 01:28:35.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088513, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088504, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:28:37.049: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 01:28:37.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088513, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088504, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:28:39.030: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 01:28:39.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088513, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088504, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:28:41.029: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 01:28:41.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088513, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088504, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:28:43.031: INFO: all replica sets need to contain the pod-template-hash label
Aug 27 01:28:43.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088505, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088513, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088504, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:28:45.029: INFO: 
Aug 27 01:28:45.029: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 27 01:28:45.037: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-6113 /apis/apps/v1/namespaces/deployment-6113/deployments/test-rollover-deployment e5288856-e4d7-4ec6-9024-e82de252d2f9 4082950 2 2020-08-27 01:28:24 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00309e248  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-27 01:28:25 +0000 UTC,LastTransitionTime:2020-08-27 01:28:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-08-27 01:28:44 +0000 UTC,LastTransitionTime:2020-08-27 01:28:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 27 01:28:45.040: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-6113 /apis/apps/v1/namespaces/deployment-6113/replicasets/test-rollover-deployment-574d6dfbff ab54e77d-1990-4b03-a6f2-099610616fd0 4082938 2 2020-08-27 01:28:27 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e5288856-e4d7-4ec6-9024-e82de252d2f9 0xc00309e9d7 0xc00309e9d8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00309ea68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 27 01:28:45.040: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 27 01:28:45.040: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-6113 /apis/apps/v1/namespaces/deployment-6113/replicasets/test-rollover-controller 06a6cdb0-f357-40b6-8b6f-7fb7f6357be2 4082949 2 2020-08-27 01:28:17 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e5288856-e4d7-4ec6-9024-e82de252d2f9 0xc00309e857 0xc00309e858}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00309e8d8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 27 01:28:45.040: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-6113 /apis/apps/v1/namespaces/deployment-6113/replicasets/test-rollover-deployment-f6c94f66c 40a9b65c-8f2a-4220-b410-8d82691bcbb1 4082891 2 2020-08-27 01:28:24 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e5288856-e4d7-4ec6-9024-e82de252d2f9 0xc00309eb20 0xc00309eb21}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00309eb98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 27 01:28:45.043: INFO: Pod "test-rollover-deployment-574d6dfbff-5pfzp" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-5pfzp test-rollover-deployment-574d6dfbff- deployment-6113 /api/v1/namespaces/deployment-6113/pods/test-rollover-deployment-574d6dfbff-5pfzp 2b840ca5-5bb5-4a4c-a51b-174cae699a09 4082906 0 2020-08-27 01:28:29 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff ab54e77d-1990-4b03-a6f2-099610616fd0 0xc00309f437 0xc00309f438}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xxj29,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xxj29,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xxj29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:28:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:28:33 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:28:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:28:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.149,StartTime:2020-08-27 01:28:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:28:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://6073c7ef9424ad841f46a4e635915e2bc44950d61030f61a51d31930d650076a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:28:45.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6113" for this suite.

• [SLOW TEST:27.972 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":87,"skipped":1387,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:28:45.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-a98530e8-24fb-479f-9e73-d9321ce515e0
STEP: Creating a pod to test consume configMaps
Aug 27 01:28:45.269: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea5f18c7-e1a5-4621-a09b-2734e6585605" in namespace "projected-9521" to be "success or failure"
Aug 27 01:28:45.345: INFO: Pod "pod-projected-configmaps-ea5f18c7-e1a5-4621-a09b-2734e6585605": Phase="Pending", Reason="", readiness=false. Elapsed: 75.161602ms
Aug 27 01:28:47.349: INFO: Pod "pod-projected-configmaps-ea5f18c7-e1a5-4621-a09b-2734e6585605": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079478511s
Aug 27 01:28:49.354: INFO: Pod "pod-projected-configmaps-ea5f18c7-e1a5-4621-a09b-2734e6585605": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084427378s
STEP: Saw pod success
Aug 27 01:28:49.354: INFO: Pod "pod-projected-configmaps-ea5f18c7-e1a5-4621-a09b-2734e6585605" satisfied condition "success or failure"
Aug 27 01:28:49.356: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-ea5f18c7-e1a5-4621-a09b-2734e6585605 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 01:28:49.438: INFO: Waiting for pod pod-projected-configmaps-ea5f18c7-e1a5-4621-a09b-2734e6585605 to disappear
Aug 27 01:28:49.514: INFO: Pod pod-projected-configmaps-ea5f18c7-e1a5-4621-a09b-2734e6585605 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:28:49.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9521" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1387,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:28:49.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:28:49.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42a9cb10-2d2e-422a-a0b3-4e854358b30f" in namespace "downward-api-2987" to be "success or failure"
Aug 27 01:28:49.750: INFO: Pod "downwardapi-volume-42a9cb10-2d2e-422a-a0b3-4e854358b30f": Phase="Pending", Reason="", readiness=false. Elapsed: 61.241944ms
Aug 27 01:28:51.808: INFO: Pod "downwardapi-volume-42a9cb10-2d2e-422a-a0b3-4e854358b30f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11887063s
Aug 27 01:28:53.812: INFO: Pod "downwardapi-volume-42a9cb10-2d2e-422a-a0b3-4e854358b30f": Phase="Running", Reason="", readiness=true. Elapsed: 4.122614986s
Aug 27 01:28:55.815: INFO: Pod "downwardapi-volume-42a9cb10-2d2e-422a-a0b3-4e854358b30f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125845527s
STEP: Saw pod success
Aug 27 01:28:55.815: INFO: Pod "downwardapi-volume-42a9cb10-2d2e-422a-a0b3-4e854358b30f" satisfied condition "success or failure"
Aug 27 01:28:55.817: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-42a9cb10-2d2e-422a-a0b3-4e854358b30f container client-container: 
STEP: delete the pod
Aug 27 01:28:56.037: INFO: Waiting for pod downwardapi-volume-42a9cb10-2d2e-422a-a0b3-4e854358b30f to disappear
Aug 27 01:28:56.232: INFO: Pod downwardapi-volume-42a9cb10-2d2e-422a-a0b3-4e854358b30f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:28:56.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2987" for this suite.

• [SLOW TEST:6.714 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1391,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:28:56.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 27 01:28:56.358: INFO: Waiting up to 5m0s for pod "pod-fe2c66d2-65a6-4c73-87fe-32188b6f03f9" in namespace "emptydir-3563" to be "success or failure"
Aug 27 01:28:56.395: INFO: Pod "pod-fe2c66d2-65a6-4c73-87fe-32188b6f03f9": Phase="Pending", Reason="", readiness=false. Elapsed: 36.777685ms
Aug 27 01:28:58.404: INFO: Pod "pod-fe2c66d2-65a6-4c73-87fe-32188b6f03f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045915397s
Aug 27 01:29:00.441: INFO: Pod "pod-fe2c66d2-65a6-4c73-87fe-32188b6f03f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083598472s
STEP: Saw pod success
Aug 27 01:29:00.442: INFO: Pod "pod-fe2c66d2-65a6-4c73-87fe-32188b6f03f9" satisfied condition "success or failure"
Aug 27 01:29:00.444: INFO: Trying to get logs from node jerma-worker pod pod-fe2c66d2-65a6-4c73-87fe-32188b6f03f9 container test-container: 
STEP: delete the pod
Aug 27 01:29:00.539: INFO: Waiting for pod pod-fe2c66d2-65a6-4c73-87fe-32188b6f03f9 to disappear
Aug 27 01:29:00.567: INFO: Pod pod-fe2c66d2-65a6-4c73-87fe-32188b6f03f9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:29:00.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3563" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1402,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:29:00.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0827 01:29:13.011950       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 01:29:13.011: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:29:13.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2002" for this suite.

• [SLOW TEST:12.754 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":91,"skipped":1406,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:29:13.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-ba3069ab-112d-4ede-abfe-841d478abdb6
STEP: Creating a pod to test consume configMaps
Aug 27 01:29:14.025: INFO: Waiting up to 5m0s for pod "pod-configmaps-a351efe1-060d-4213-a70e-f7e742e09082" in namespace "configmap-1090" to be "success or failure"
Aug 27 01:29:14.208: INFO: Pod "pod-configmaps-a351efe1-060d-4213-a70e-f7e742e09082": Phase="Pending", Reason="", readiness=false. Elapsed: 182.520917ms
Aug 27 01:29:16.328: INFO: Pod "pod-configmaps-a351efe1-060d-4213-a70e-f7e742e09082": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302531279s
Aug 27 01:29:18.332: INFO: Pod "pod-configmaps-a351efe1-060d-4213-a70e-f7e742e09082": Phase="Running", Reason="", readiness=true. Elapsed: 4.306448572s
Aug 27 01:29:20.347: INFO: Pod "pod-configmaps-a351efe1-060d-4213-a70e-f7e742e09082": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.321174292s
STEP: Saw pod success
Aug 27 01:29:20.347: INFO: Pod "pod-configmaps-a351efe1-060d-4213-a70e-f7e742e09082" satisfied condition "success or failure"
Aug 27 01:29:20.555: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a351efe1-060d-4213-a70e-f7e742e09082 container configmap-volume-test: 
STEP: delete the pod
Aug 27 01:29:20.891: INFO: Waiting for pod pod-configmaps-a351efe1-060d-4213-a70e-f7e742e09082 to disappear
Aug 27 01:29:20.894: INFO: Pod pod-configmaps-a351efe1-060d-4213-a70e-f7e742e09082 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:29:20.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1090" for this suite.

• [SLOW TEST:7.882 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1409,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:29:21.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:29:22.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7570" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":93,"skipped":1431,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:29:22.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-edb1b42e-1259-4e0f-96c1-333f556d6f7a
STEP: Creating a pod to test consume configMaps
Aug 27 01:29:22.556: INFO: Waiting up to 5m0s for pod "pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517" in namespace "configmap-5469" to be "success or failure"
Aug 27 01:29:22.586: INFO: Pod "pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517": Phase="Pending", Reason="", readiness=false. Elapsed: 30.095649ms
Aug 27 01:29:24.633: INFO: Pod "pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077130948s
Aug 27 01:29:26.663: INFO: Pod "pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107129848s
Aug 27 01:29:28.772: INFO: Pod "pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215537534s
Aug 27 01:29:30.788: INFO: Pod "pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517": Phase="Pending", Reason="", readiness=false. Elapsed: 8.232027554s
Aug 27 01:29:32.945: INFO: Pod "pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517": Phase="Pending", Reason="", readiness=false. Elapsed: 10.388917516s
Aug 27 01:29:35.474: INFO: Pod "pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517": Phase="Pending", Reason="", readiness=false. Elapsed: 12.917925989s
Aug 27 01:29:37.694: INFO: Pod "pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.137850909s
STEP: Saw pod success
Aug 27 01:29:37.694: INFO: Pod "pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517" satisfied condition "success or failure"
Aug 27 01:29:37.697: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517 container configmap-volume-test: 
STEP: delete the pod
Aug 27 01:29:38.352: INFO: Waiting for pod pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517 to disappear
Aug 27 01:29:38.393: INFO: Pod pod-configmaps-2ecbd681-2148-4f38-bcea-57c3836e1517 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:29:38.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5469" for this suite.

• [SLOW TEST:16.141 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1450,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:29:38.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-a8c81f41-888f-4c9c-8601-44b51df67c83 in namespace container-probe-8446
Aug 27 01:29:45.226: INFO: Started pod test-webserver-a8c81f41-888f-4c9c-8601-44b51df67c83 in namespace container-probe-8446
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 01:29:45.229: INFO: Initial restart count of pod test-webserver-a8c81f41-888f-4c9c-8601-44b51df67c83 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:33:45.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8446" for this suite.

• [SLOW TEST:247.383 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1477,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:33:45.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:33:46.967: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 01:33:49.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088826, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088826, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088827, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088826, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:33:51.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088826, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088826, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088827, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734088826, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:33:55.187: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:33:55.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1432-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:33:56.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7137" for this suite.
STEP: Destroying namespace "webhook-7137-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.459 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":96,"skipped":1481,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:33:57.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:33:57.847: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/: 
alternatives.log
containers/

[The directory listing above (alternatives.log, containers/) is the node's /logs/ index returned through the proxy subresource; the identical listing repeats for proxy attempts (1) through (19). The remainder of this Proxy test's output and the opening lines of the next test, [sig-network] Networking Granular Checks: Pods, were truncated in extraction; the surviving header line follows.]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9946
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 27 01:33:58.121: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 27 01:34:20.346: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.186:8080/dial?request=hostname&protocol=http&host=10.244.2.185&port=8080&tries=1'] Namespace:pod-network-test-9946 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:34:20.346: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:34:20.380255       6 log.go:172] (0xc002a4a4d0) (0xc00168e780) Create stream
I0827 01:34:20.380302       6 log.go:172] (0xc002a4a4d0) (0xc00168e780) Stream added, broadcasting: 1
I0827 01:34:20.382498       6 log.go:172] (0xc002a4a4d0) Reply frame received for 1
I0827 01:34:20.382550       6 log.go:172] (0xc002a4a4d0) (0xc0015cd180) Create stream
I0827 01:34:20.382565       6 log.go:172] (0xc002a4a4d0) (0xc0015cd180) Stream added, broadcasting: 3
I0827 01:34:20.383726       6 log.go:172] (0xc002a4a4d0) Reply frame received for 3
I0827 01:34:20.383784       6 log.go:172] (0xc002a4a4d0) (0xc0013b4280) Create stream
I0827 01:34:20.383811       6 log.go:172] (0xc002a4a4d0) (0xc0013b4280) Stream added, broadcasting: 5
I0827 01:34:20.388945       6 log.go:172] (0xc002a4a4d0) Reply frame received for 5
I0827 01:34:20.465253       6 log.go:172] (0xc002a4a4d0) Data frame received for 3
I0827 01:34:20.465289       6 log.go:172] (0xc0015cd180) (3) Data frame handling
I0827 01:34:20.465308       6 log.go:172] (0xc0015cd180) (3) Data frame sent
I0827 01:34:20.465988       6 log.go:172] (0xc002a4a4d0) Data frame received for 3
I0827 01:34:20.466011       6 log.go:172] (0xc0015cd180) (3) Data frame handling
I0827 01:34:20.466025       6 log.go:172] (0xc002a4a4d0) Data frame received for 5
I0827 01:34:20.466032       6 log.go:172] (0xc0013b4280) (5) Data frame handling
I0827 01:34:20.467949       6 log.go:172] (0xc002a4a4d0) Data frame received for 1
I0827 01:34:20.467985       6 log.go:172] (0xc00168e780) (1) Data frame handling
I0827 01:34:20.468021       6 log.go:172] (0xc00168e780) (1) Data frame sent
I0827 01:34:20.468046       6 log.go:172] (0xc002a4a4d0) (0xc00168e780) Stream removed, broadcasting: 1
I0827 01:34:20.468065       6 log.go:172] (0xc002a4a4d0) Go away received
I0827 01:34:20.468163       6 log.go:172] (0xc002a4a4d0) (0xc00168e780) Stream removed, broadcasting: 1
I0827 01:34:20.468190       6 log.go:172] (0xc002a4a4d0) (0xc0015cd180) Stream removed, broadcasting: 3
I0827 01:34:20.468200       6 log.go:172] (0xc002a4a4d0) (0xc0013b4280) Stream removed, broadcasting: 5
Aug 27 01:34:20.468: INFO: Waiting for responses: map[]
Aug 27 01:34:20.471: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.186:8080/dial?request=hostname&protocol=http&host=10.244.1.157&port=8080&tries=1'] Namespace:pod-network-test-9946 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:34:20.471: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:34:20.507268       6 log.go:172] (0xc003166d10) (0xc0015cdd60) Create stream
I0827 01:34:20.507302       6 log.go:172] (0xc003166d10) (0xc0015cdd60) Stream added, broadcasting: 1
I0827 01:34:20.509153       6 log.go:172] (0xc003166d10) Reply frame received for 1
I0827 01:34:20.509185       6 log.go:172] (0xc003166d10) (0xc0013b4500) Create stream
I0827 01:34:20.509195       6 log.go:172] (0xc003166d10) (0xc0013b4500) Stream added, broadcasting: 3
I0827 01:34:20.510216       6 log.go:172] (0xc003166d10) Reply frame received for 3
I0827 01:34:20.510257       6 log.go:172] (0xc003166d10) (0xc0013b45a0) Create stream
I0827 01:34:20.510272       6 log.go:172] (0xc003166d10) (0xc0013b45a0) Stream added, broadcasting: 5
I0827 01:34:20.511038       6 log.go:172] (0xc003166d10) Reply frame received for 5
I0827 01:34:20.580035       6 log.go:172] (0xc003166d10) Data frame received for 3
I0827 01:34:20.580058       6 log.go:172] (0xc0013b4500) (3) Data frame handling
I0827 01:34:20.580073       6 log.go:172] (0xc0013b4500) (3) Data frame sent
I0827 01:34:20.580680       6 log.go:172] (0xc003166d10) Data frame received for 3
I0827 01:34:20.580692       6 log.go:172] (0xc0013b4500) (3) Data frame handling
I0827 01:34:20.580902       6 log.go:172] (0xc003166d10) Data frame received for 5
I0827 01:34:20.580921       6 log.go:172] (0xc0013b45a0) (5) Data frame handling
I0827 01:34:20.582300       6 log.go:172] (0xc003166d10) Data frame received for 1
I0827 01:34:20.582318       6 log.go:172] (0xc0015cdd60) (1) Data frame handling
I0827 01:34:20.582334       6 log.go:172] (0xc0015cdd60) (1) Data frame sent
I0827 01:34:20.582348       6 log.go:172] (0xc003166d10) (0xc0015cdd60) Stream removed, broadcasting: 1
I0827 01:34:20.582428       6 log.go:172] (0xc003166d10) (0xc0015cdd60) Stream removed, broadcasting: 1
I0827 01:34:20.582443       6 log.go:172] (0xc003166d10) (0xc0013b4500) Stream removed, broadcasting: 3
I0827 01:34:20.582450       6 log.go:172] (0xc003166d10) (0xc0013b45a0) Stream removed, broadcasting: 5
Aug 27 01:34:20.582: INFO: Waiting for responses: map[]
I0827 01:34:20.582497       6 log.go:172] (0xc003166d10) Go away received
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:34:20.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9946" for this suite.

• [SLOW TEST:22.657 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1520,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:34:20.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:34:20.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1797c69f-8c9d-4613-ae46-f623dddc92fd" in namespace "downward-api-8495" to be "success or failure"
Aug 27 01:34:20.704: INFO: Pod "downwardapi-volume-1797c69f-8c9d-4613-ae46-f623dddc92fd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.562156ms
Aug 27 01:34:22.708: INFO: Pod "downwardapi-volume-1797c69f-8c9d-4613-ae46-f623dddc92fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016315136s
Aug 27 01:34:24.712: INFO: Pod "downwardapi-volume-1797c69f-8c9d-4613-ae46-f623dddc92fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020388107s
STEP: Saw pod success
Aug 27 01:34:24.712: INFO: Pod "downwardapi-volume-1797c69f-8c9d-4613-ae46-f623dddc92fd" satisfied condition "success or failure"
Aug 27 01:34:24.714: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1797c69f-8c9d-4613-ae46-f623dddc92fd container client-container: 
STEP: delete the pod
Aug 27 01:34:24.747: INFO: Waiting for pod downwardapi-volume-1797c69f-8c9d-4613-ae46-f623dddc92fd to disappear
Aug 27 01:34:24.752: INFO: Pod downwardapi-volume-1797c69f-8c9d-4613-ae46-f623dddc92fd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:34:24.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8495" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1539,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
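Reduced to a sketch, the DefaultMode spec above mounts the downward API as a volume with a non-default file mode and reads the mode back from inside the container. The image, paths, and the use of stat below are illustrative assumptions; the framework's own test image does the equivalent:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-defaultmode    # the test suffixes a UUID
spec:
  restartPolicy: Never                    # lets the pod reach "Succeeded"
  containers:
  - name: client-container
    image: busybox                        # assumed image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]   # prints 400
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                   # the behavior under test
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name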
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:34:24.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-5bf4ac3f-2552-4c0d-96bc-be4f9c83fba1 in namespace container-probe-7469
Aug 27 01:34:30.882: INFO: Started pod busybox-5bf4ac3f-2552-4c0d-96bc-be4f9c83fba1 in namespace container-probe-7469
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 01:34:30.885: INFO: Initial restart count of pod busybox-5bf4ac3f-2552-4c0d-96bc-be4f9c83fba1 is 0
Aug 27 01:35:29.879: INFO: Restart count of pod container-probe-7469/busybox-5bf4ac3f-2552-4c0d-96bc-be4f9c83fba1 is now 1 (58.993512702s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:35:29.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7469" for this suite.

• [SLOW TEST:65.258 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1568,"failed":0}
SSSSSS
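The restart the spec waits roughly 59 seconds for is the standard exec-liveness pattern: the container creates /tmp/health, removes it after 30 seconds, and the kubelet's `cat /tmp/health` probe then fails until the container is restarted, bumping restartCount from 0 to 1. A minimal reproduction (image and timings are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5       # with the default failureThreshold of 3, a restart follows ~15s after the file disappears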
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:35:30.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-7975f5f4-d4cf-4c3e-94f0-95f6863841db
STEP: Creating a pod to test consume configMaps
Aug 27 01:35:30.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1b4b599-c7c6-4310-afd4-fc074f235790" in namespace "configmap-1292" to be "success or failure"
Aug 27 01:35:30.352: INFO: Pod "pod-configmaps-a1b4b599-c7c6-4310-afd4-fc074f235790": Phase="Pending", Reason="", readiness=false. Elapsed: 4.591905ms
Aug 27 01:35:32.356: INFO: Pod "pod-configmaps-a1b4b599-c7c6-4310-afd4-fc074f235790": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008046938s
Aug 27 01:35:34.407: INFO: Pod "pod-configmaps-a1b4b599-c7c6-4310-afd4-fc074f235790": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058722959s
Aug 27 01:35:36.410: INFO: Pod "pod-configmaps-a1b4b599-c7c6-4310-afd4-fc074f235790": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.062583668s
STEP: Saw pod success
Aug 27 01:35:36.410: INFO: Pod "pod-configmaps-a1b4b599-c7c6-4310-afd4-fc074f235790" satisfied condition "success or failure"
Aug 27 01:35:36.413: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a1b4b599-c7c6-4310-afd4-fc074f235790 container configmap-volume-test: 
STEP: delete the pod
Aug 27 01:35:36.811: INFO: Waiting for pod pod-configmaps-a1b4b599-c7c6-4310-afd4-fc074f235790 to disappear
Aug 27 01:35:36.856: INFO: Pod pod-configmaps-a1b4b599-c7c6-4310-afd4-fc074f235790 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:35:36.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1292" for this suite.

• [SLOW TEST:6.824 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1574,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
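The ConfigMap variant is the same defaultMode mechanic applied to a configMap volume. A comparable sketch (names, image, and the stat command are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400     # every projected key gets mode 0400 unless overridden per item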
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:35:36.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-0e2c737c-bd88-4849-b344-c0c6efa62da7
STEP: Creating a pod to test consume secrets
Aug 27 01:35:37.447: INFO: Waiting up to 5m0s for pod "pod-secrets-eb9de6bf-d84f-4ae7-a46b-115e2fbe068b" in namespace "secrets-469" to be "success or failure"
Aug 27 01:35:37.496: INFO: Pod "pod-secrets-eb9de6bf-d84f-4ae7-a46b-115e2fbe068b": Phase="Pending", Reason="", readiness=false. Elapsed: 48.884552ms
Aug 27 01:35:39.499: INFO: Pod "pod-secrets-eb9de6bf-d84f-4ae7-a46b-115e2fbe068b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051982812s
Aug 27 01:35:41.863: INFO: Pod "pod-secrets-eb9de6bf-d84f-4ae7-a46b-115e2fbe068b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41652514s
Aug 27 01:35:43.866: INFO: Pod "pod-secrets-eb9de6bf-d84f-4ae7-a46b-115e2fbe068b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.41953899s
STEP: Saw pod success
Aug 27 01:35:43.866: INFO: Pod "pod-secrets-eb9de6bf-d84f-4ae7-a46b-115e2fbe068b" satisfied condition "success or failure"
Aug 27 01:35:43.868: INFO: Trying to get logs from node jerma-worker pod pod-secrets-eb9de6bf-d84f-4ae7-a46b-115e2fbe068b container secret-volume-test: 
STEP: delete the pod
Aug 27 01:35:43.886: INFO: Waiting for pod pod-secrets-eb9de6bf-d84f-4ae7-a46b-115e2fbe068b to disappear
Aug 27 01:35:43.890: INFO: Pod pod-secrets-eb9de6bf-d84f-4ae7-a46b-115e2fbe068b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:35:43.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-469" for this suite.

• [SLOW TEST:7.033 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1596,"failed":0}
S
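Secrets follow the same defaultMode mechanics, with the wrinkle that secret volumes are tmpfs-backed. A sketch under the same assumptions as the ConfigMap example above:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==      # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400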
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:35:43.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:36:04.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5811" for this suite.

• [SLOW TEST:20.200 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":103,"skipped":1597,"failed":0}
SSSSSSSS
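The two quotas the spec creates differ only in scope. "Terminating" here means pods with spec.activeDeadlineSeconds set, so the long-running pod is charged against the NotTerminating quota and ignored by the Terminating one, and the deadline-bearing pod the reverse. Roughly (names assumed):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating
spec:
  hard:
    pods: "1"
  scopes: ["Terminating"]       # counts only pods with activeDeadlineSeconds set
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-terminating
spec:
  hard:
    pods: "1"
  scopes: ["NotTerminating"]    # counts only pods without a deadline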
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:36:04.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:36:09.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9165" for this suite.

• [SLOW TEST:5.252 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1605,"failed":0}
S
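The "image defaults" spec is notable for what the pod omits: with neither command nor args in the container spec, the container runs the image's built-in ENTRYPOINT and CMD unchanged, and the test checks the known default output of its test image. The shape of the pod (image is an assumption; any image with a known default behavior illustrates the point):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-default
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed
    # no command: and no args: -- the image's ENTRYPOINT/CMD apply as baked in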
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:36:09.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:36:09.518: INFO: Waiting up to 5m0s for pod "downwardapi-volume-945b565e-1398-4fe1-bb10-4cde29874fda" in namespace "projected-5876" to be "success or failure"
Aug 27 01:36:09.522: INFO: Pod "downwardapi-volume-945b565e-1398-4fe1-bb10-4cde29874fda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.003552ms
Aug 27 01:36:11.711: INFO: Pod "downwardapi-volume-945b565e-1398-4fe1-bb10-4cde29874fda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193843046s
Aug 27 01:36:13.870: INFO: Pod "downwardapi-volume-945b565e-1398-4fe1-bb10-4cde29874fda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352372527s
Aug 27 01:36:15.906: INFO: Pod "downwardapi-volume-945b565e-1398-4fe1-bb10-4cde29874fda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.388159458s
STEP: Saw pod success
Aug 27 01:36:15.906: INFO: Pod "downwardapi-volume-945b565e-1398-4fe1-bb10-4cde29874fda" satisfied condition "success or failure"
Aug 27 01:36:15.908: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-945b565e-1398-4fe1-bb10-4cde29874fda container client-container: 
STEP: delete the pod
Aug 27 01:36:15.989: INFO: Waiting for pod downwardapi-volume-945b565e-1398-4fe1-bb10-4cde29874fda to disappear
Aug 27 01:36:16.050: INFO: Pod downwardapi-volume-945b565e-1398-4fe1-bb10-4cde29874fda no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:36:16.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5876" for this suite.

• [SLOW TEST:6.709 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1606,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
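The cpu-limit spec projects container resource fields into a file via a projected downwardAPI volume. One detail worth calling out: resourceFieldRef values use a divisor that defaults to 1 and round up, so a fractional limit such as 1250m reads back as the whole number 2. A sketch (image and paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 1250m            # surfaces as "2" with the default divisor of 1
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu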
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:36:16.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:36:16.482: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f02f6f1-ea86-4782-830d-ef43478ed514" in namespace "projected-6527" to be "success or failure"
Aug 27 01:36:16.974: INFO: Pod "downwardapi-volume-4f02f6f1-ea86-4782-830d-ef43478ed514": Phase="Pending", Reason="", readiness=false. Elapsed: 492.36232ms
Aug 27 01:36:18.977: INFO: Pod "downwardapi-volume-4f02f6f1-ea86-4782-830d-ef43478ed514": Phase="Pending", Reason="", readiness=false. Elapsed: 2.494850962s
Aug 27 01:36:20.981: INFO: Pod "downwardapi-volume-4f02f6f1-ea86-4782-830d-ef43478ed514": Phase="Pending", Reason="", readiness=false. Elapsed: 4.498843765s
Aug 27 01:36:22.984: INFO: Pod "downwardapi-volume-4f02f6f1-ea86-4782-830d-ef43478ed514": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.502422327s
STEP: Saw pod success
Aug 27 01:36:22.984: INFO: Pod "downwardapi-volume-4f02f6f1-ea86-4782-830d-ef43478ed514" satisfied condition "success or failure"
Aug 27 01:36:22.987: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4f02f6f1-ea86-4782-830d-ef43478ed514 container client-container: 
STEP: delete the pod
Aug 27 01:36:23.025: INFO: Waiting for pod downwardapi-volume-4f02f6f1-ea86-4782-830d-ef43478ed514 to disappear
Aug 27 01:36:23.235: INFO: Pod downwardapi-volume-4f02f6f1-ea86-4782-830d-ef43478ed514 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:36:23.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6527" for this suite.

• [SLOW TEST:7.185 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1628,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
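The podname spec differs from the cpu-limit sketch above only in the projected item: a fieldRef to metadata.name rather than a resourceFieldRef. For completeness, under the same assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]   # prints the pod's own name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name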
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:36:23.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9571
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-9571
I0827 01:36:27.631723       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9571, replica count: 2
I0827 01:36:30.682142       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 01:36:33.682428       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 01:36:36.682622       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 01:36:39.682830       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 27 01:36:39.682: INFO: Creating new exec pod
Aug 27 01:36:52.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9571 execpodbmflj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 27 01:36:58.304: INFO: stderr: "I0827 01:36:58.219343    1035 log.go:172] (0xc000105760) (0xc0008ac0a0) Create stream\nI0827 01:36:58.219379    1035 log.go:172] (0xc000105760) (0xc0008ac0a0) Stream added, broadcasting: 1\nI0827 01:36:58.222028    1035 log.go:172] (0xc000105760) Reply frame received for 1\nI0827 01:36:58.222077    1035 log.go:172] (0xc000105760) (0xc000836000) Create stream\nI0827 01:36:58.222092    1035 log.go:172] (0xc000105760) (0xc000836000) Stream added, broadcasting: 3\nI0827 01:36:58.223065    1035 log.go:172] (0xc000105760) Reply frame received for 3\nI0827 01:36:58.223098    1035 log.go:172] (0xc000105760) (0xc0008ac140) Create stream\nI0827 01:36:58.223108    1035 log.go:172] (0xc000105760) (0xc0008ac140) Stream added, broadcasting: 5\nI0827 01:36:58.223983    1035 log.go:172] (0xc000105760) Reply frame received for 5\nI0827 01:36:58.290760    1035 log.go:172] (0xc000105760) Data frame received for 3\nI0827 01:36:58.290810    1035 log.go:172] (0xc000836000) (3) Data frame handling\nI0827 01:36:58.290881    1035 log.go:172] (0xc000105760) Data frame received for 5\nI0827 01:36:58.290909    1035 log.go:172] (0xc0008ac140) (5) Data frame handling\nI0827 01:36:58.290927    1035 log.go:172] (0xc0008ac140) (5) Data frame sent\nI0827 01:36:58.290946    1035 log.go:172] (0xc000105760) Data frame received for 5\nI0827 01:36:58.290967    1035 log.go:172] (0xc0008ac140) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0827 01:36:58.292293    1035 log.go:172] (0xc000105760) Data frame received for 1\nI0827 01:36:58.292320    1035 log.go:172] (0xc0008ac0a0) (1) Data frame handling\nI0827 01:36:58.292335    1035 log.go:172] (0xc0008ac0a0) (1) Data frame sent\nI0827 01:36:58.292347    1035 log.go:172] (0xc000105760) (0xc0008ac0a0) Stream removed, broadcasting: 1\nI0827 01:36:58.292370    1035 log.go:172] (0xc000105760) Go away received\nI0827 01:36:58.293031    1035 log.go:172] (0xc000105760) (0xc0008ac0a0) Stream removed, broadcasting: 1\nI0827 01:36:58.293069    1035 log.go:172] (0xc000105760) (0xc000836000) Stream removed, broadcasting: 3\nI0827 01:36:58.293090    1035 log.go:172] (0xc000105760) (0xc0008ac140) Stream removed, broadcasting: 5\n"
Aug 27 01:36:58.304: INFO: stdout: ""
Aug 27 01:36:58.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9571 execpodbmflj -- /bin/sh -x -c nc -zv -t -w 2 10.107.89.251 80'
Aug 27 01:36:58.512: INFO: stderr: "I0827 01:36:58.436483    1067 log.go:172] (0xc000b9c0b0) (0xc000900000) Create stream\nI0827 01:36:58.436548    1067 log.go:172] (0xc000b9c0b0) (0xc000900000) Stream added, broadcasting: 1\nI0827 01:36:58.439050    1067 log.go:172] (0xc000b9c0b0) Reply frame received for 1\nI0827 01:36:58.439103    1067 log.go:172] (0xc000b9c0b0) (0xc00093c000) Create stream\nI0827 01:36:58.439118    1067 log.go:172] (0xc000b9c0b0) (0xc00093c000) Stream added, broadcasting: 3\nI0827 01:36:58.439739    1067 log.go:172] (0xc000b9c0b0) Reply frame received for 3\nI0827 01:36:58.439768    1067 log.go:172] (0xc000b9c0b0) (0xc0006b99a0) Create stream\nI0827 01:36:58.439774    1067 log.go:172] (0xc000b9c0b0) (0xc0006b99a0) Stream added, broadcasting: 5\nI0827 01:36:58.440418    1067 log.go:172] (0xc000b9c0b0) Reply frame received for 5\nI0827 01:36:58.502383    1067 log.go:172] (0xc000b9c0b0) Data frame received for 3\nI0827 01:36:58.502429    1067 log.go:172] (0xc00093c000) (3) Data frame handling\nI0827 01:36:58.502451    1067 log.go:172] (0xc000b9c0b0) Data frame received for 5\nI0827 01:36:58.502461    1067 log.go:172] (0xc0006b99a0) (5) Data frame handling\nI0827 01:36:58.502473    1067 log.go:172] (0xc0006b99a0) (5) Data frame sent\nI0827 01:36:58.502487    1067 log.go:172] (0xc000b9c0b0) Data frame received for 5\nI0827 01:36:58.502530    1067 log.go:172] (0xc0006b99a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.89.251 80\nConnection to 10.107.89.251 80 port [tcp/http] succeeded!\nI0827 01:36:58.503929    1067 log.go:172] (0xc000b9c0b0) Data frame received for 1\nI0827 01:36:58.503964    1067 log.go:172] (0xc000900000) (1) Data frame handling\nI0827 01:36:58.503994    1067 log.go:172] (0xc000900000) (1) Data frame sent\nI0827 01:36:58.504018    1067 log.go:172] (0xc000b9c0b0) (0xc000900000) Stream removed, broadcasting: 1\nI0827 01:36:58.504037    1067 log.go:172] (0xc000b9c0b0) Go away received\nI0827 01:36:58.504558    1067 log.go:172] (0xc000b9c0b0) (0xc000900000) Stream removed, broadcasting: 1\nI0827 01:36:58.504581    1067 log.go:172] (0xc000b9c0b0) (0xc00093c000) Stream removed, broadcasting: 3\nI0827 01:36:58.504593    1067 log.go:172] (0xc000b9c0b0) (0xc0006b99a0) Stream removed, broadcasting: 5\n"
Aug 27 01:36:58.512: INFO: stdout: ""
Aug 27 01:36:58.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9571 execpodbmflj -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 32101'
Aug 27 01:36:58.728: INFO: stderr: "I0827 01:36:58.647958    1086 log.go:172] (0xc000a520b0) (0xc000a6c000) Create stream\nI0827 01:36:58.648019    1086 log.go:172] (0xc000a520b0) (0xc000a6c000) Stream added, broadcasting: 1\nI0827 01:36:58.651432    1086 log.go:172] (0xc000a520b0) Reply frame received for 1\nI0827 01:36:58.651483    1086 log.go:172] (0xc000a520b0) (0xc00068ba40) Create stream\nI0827 01:36:58.651498    1086 log.go:172] (0xc000a520b0) (0xc00068ba40) Stream added, broadcasting: 3\nI0827 01:36:58.652634    1086 log.go:172] (0xc000a520b0) Reply frame received for 3\nI0827 01:36:58.652682    1086 log.go:172] (0xc000a520b0) (0xc0001cc000) Create stream\nI0827 01:36:58.652708    1086 log.go:172] (0xc000a520b0) (0xc0001cc000) Stream added, broadcasting: 5\nI0827 01:36:58.653798    1086 log.go:172] (0xc000a520b0) Reply frame received for 5\nI0827 01:36:58.718600    1086 log.go:172] (0xc000a520b0) Data frame received for 5\nI0827 01:36:58.718628    1086 log.go:172] (0xc0001cc000) (5) Data frame handling\nI0827 01:36:58.718643    1086 log.go:172] (0xc0001cc000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.6 32101\nI0827 01:36:58.718736    1086 log.go:172] (0xc000a520b0) Data frame received for 5\nI0827 01:36:58.718750    1086 log.go:172] (0xc0001cc000) (5) Data frame handling\nI0827 01:36:58.718766    1086 log.go:172] (0xc0001cc000) (5) Data frame sent\nConnection to 172.18.0.6 32101 port [tcp/32101] succeeded!\nI0827 01:36:58.719264    1086 log.go:172] (0xc000a520b0) Data frame received for 3\nI0827 01:36:58.719298    1086 log.go:172] (0xc00068ba40) (3) Data frame handling\nI0827 01:36:58.719331    1086 log.go:172] (0xc000a520b0) Data frame received for 5\nI0827 01:36:58.719343    1086 log.go:172] (0xc0001cc000) (5) Data frame handling\nI0827 01:36:58.721104    1086 log.go:172] (0xc000a520b0) Data frame received for 1\nI0827 01:36:58.721137    1086 log.go:172] (0xc000a6c000) (1) Data frame handling\nI0827 01:36:58.721150    1086 log.go:172] (0xc000a6c000) (1) Data frame sent\nI0827 01:36:58.721159    1086 log.go:172] (0xc000a520b0) (0xc000a6c000) Stream removed, broadcasting: 1\nI0827 01:36:58.721177    1086 log.go:172] (0xc000a520b0) Go away received\nI0827 01:36:58.721579    1086 log.go:172] (0xc000a520b0) (0xc000a6c000) Stream removed, broadcasting: 1\nI0827 01:36:58.721595    1086 log.go:172] (0xc000a520b0) (0xc00068ba40) Stream removed, broadcasting: 3\nI0827 01:36:58.721602    1086 log.go:172] (0xc000a520b0) (0xc0001cc000) Stream removed, broadcasting: 5\n"
Aug 27 01:36:58.728: INFO: stdout: ""
Aug 27 01:36:58.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9571 execpodbmflj -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.3 32101'
Aug 27 01:36:58.954: INFO: stderr: "I0827 01:36:58.874426    1106 log.go:172] (0xc000aa53f0) (0xc000b28780) Create stream\nI0827 01:36:58.874480    1106 log.go:172] (0xc000aa53f0) (0xc000b28780) Stream added, broadcasting: 1\nI0827 01:36:58.879046    1106 log.go:172] (0xc000aa53f0) Reply frame received for 1\nI0827 01:36:58.879096    1106 log.go:172] (0xc000aa53f0) (0xc0007226e0) Create stream\nI0827 01:36:58.879108    1106 log.go:172] (0xc000aa53f0) (0xc0007226e0) Stream added, broadcasting: 3\nI0827 01:36:58.880307    1106 log.go:172] (0xc000aa53f0) Reply frame received for 3\nI0827 01:36:58.880351    1106 log.go:172] (0xc000aa53f0) (0xc0005314a0) Create stream\nI0827 01:36:58.880362    1106 log.go:172] (0xc000aa53f0) (0xc0005314a0) Stream added, broadcasting: 5\nI0827 01:36:58.881423    1106 log.go:172] (0xc000aa53f0) Reply frame received for 5\nI0827 01:36:58.940142    1106 log.go:172] (0xc000aa53f0) Data frame received for 5\nI0827 01:36:58.940167    1106 log.go:172] (0xc0005314a0) (5) Data frame handling\nI0827 01:36:58.940182    1106 log.go:172] (0xc0005314a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.3 32101\nConnection to 172.18.0.3 32101 port [tcp/32101] succeeded!\nI0827 01:36:58.940212    1106 log.go:172] (0xc000aa53f0) Data frame received for 3\nI0827 01:36:58.940243    1106 log.go:172] (0xc0007226e0) (3) Data frame handling\nI0827 01:36:58.940351    1106 log.go:172] (0xc000aa53f0) Data frame received for 5\nI0827 01:36:58.940396    1106 log.go:172] (0xc0005314a0) (5) Data frame handling\nI0827 01:36:58.942059    1106 log.go:172] (0xc000aa53f0) Data frame received for 1\nI0827 01:36:58.942078    1106 log.go:172] (0xc000b28780) (1) Data frame handling\nI0827 01:36:58.942089    1106 log.go:172] (0xc000b28780) (1) Data frame sent\nI0827 01:36:58.942103    1106 log.go:172] (0xc000aa53f0) (0xc000b28780) Stream removed, broadcasting: 1\nI0827 01:36:58.942118    1106 log.go:172] (0xc000aa53f0) Go away received\nI0827 01:36:58.942576    1106 log.go:172] (0xc000aa53f0) (0xc000b28780) Stream removed, broadcasting: 1\nI0827 01:36:58.942590    1106 log.go:172] (0xc000aa53f0) (0xc0007226e0) Stream removed, broadcasting: 3\nI0827 01:36:58.942598    1106 log.go:172] (0xc000aa53f0) (0xc0005314a0) Stream removed, broadcasting: 5\n"
Aug 27 01:36:58.954: INFO: stdout: ""
Aug 27 01:36:58.954: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:36:59.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9571" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:35.766 seconds]
[sig-network] Services
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":107,"skipped":1662,"failed":0}
S
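To make the type flip above concrete: the Service starts life as a pure DNS alias and ends as a NodePort Service backed by the externalname-service replication controller. Approximately (the external target and selector label are assumptions; port 80 comes from the run, and the nodePort was auto-allocated as 32101 here):

apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: clusterset.example.com   # placeholder DNS target
---
# ...after the change to type=NodePort, with the RC's pods behind it:
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: NodePort
  selector:
    name: externalname-service           # assumed pod label
  ports:
  - port: 80
    targetPort: 80                       # nodePort allocated by the apiserver (32101 in this run)

The three nc -zv probes then confirm reachability by service name on port 80, by ClusterIP 10.107.89.251:80, and on nodePort 32101 via both node IPs.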
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:36:59.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:36:59.410: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 01:37:01.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089019, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089019, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089019, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089019, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:37:04.500: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:37:04.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:37:05.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8805" for this suite.
STEP: Destroying namespace "webhook-8805-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.015 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":108,"skipped":1663,"failed":0}
SS
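The webhook registered "via the AdmissionRegistration API" is, in outline, a ValidatingWebhookConfiguration that intercepts CREATE/UPDATE/DELETE on the test's custom resource and rejects the disallowed operations. The group, names, and path below are assumptions, and the caBundle is elided:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-operations               # assumed name
webhooks:
- name: deny-custom-resource.example.com
  rules:
  - apiGroups: ["mygroup.example.com"]    # assumed CRD group
    apiVersions: ["*"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["*"]
  clientConfig:
    service:
      namespace: webhook-8805
      name: e2e-test-webhook              # matches the endpoint waited on above
      path: /custom-resource              # assumed path
    # caBundle: <elided>
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
  failurePolicy: Fail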
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:37:06.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 27 01:37:06.071: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 01:37:09.052: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:37:19.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2521" for this suite.

• [SLOW TEST:13.582 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":109,"skipped":1665,"failed":0}
SS
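"CRs in different groups show up in OpenAPI documentation" means the test registers two structural CRDs under distinct API groups and then asserts both kinds appear in the apiserver's aggregated /openapi/v2 document. One of the pair might look like this (group and names are assumptions); the second is identical except for its group:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.groupa.example.com   # must be <plural>.<group>
spec:
  group: groupa.example.com
  scope: Namespaced
  names:
    plural: e2e-test-foos
    singular: e2e-test-foo
    kind: E2eTestFoo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object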
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:37:19.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 27 01:37:26.476: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:37:26.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7368" for this suite.

• [SLOW TEST:7.279 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1667,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
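The termination-message spec packs three requirements into one pod: run as a non-root UID, write the message to a non-default terminationMessagePath, and exit; the kubelet then copies that file into the container's terminated state, which is the "Expected: &{DONE}" comparison above. A sketch (image, UID, and path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                          # non-root, per the spec name
  containers:
  - name: termination-message-container
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path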
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:37:26.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 27 01:37:27.553: INFO: Waiting up to 5m0s for pod "pod-2d057ff3-6be2-45cc-90cd-eabc81a5051e" in namespace "emptydir-7886" to be "success or failure"
Aug 27 01:37:27.601: INFO: Pod "pod-2d057ff3-6be2-45cc-90cd-eabc81a5051e": Phase="Pending", Reason="", readiness=false. Elapsed: 47.898095ms
Aug 27 01:37:29.605: INFO: Pod "pod-2d057ff3-6be2-45cc-90cd-eabc81a5051e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051690827s
Aug 27 01:37:31.667: INFO: Pod "pod-2d057ff3-6be2-45cc-90cd-eabc81a5051e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113573694s
Aug 27 01:37:33.670: INFO: Pod "pod-2d057ff3-6be2-45cc-90cd-eabc81a5051e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.116924583s
STEP: Saw pod success
Aug 27 01:37:33.670: INFO: Pod "pod-2d057ff3-6be2-45cc-90cd-eabc81a5051e" satisfied condition "success or failure"
Aug 27 01:37:33.673: INFO: Trying to get logs from node jerma-worker pod pod-2d057ff3-6be2-45cc-90cd-eabc81a5051e container test-container: 
STEP: delete the pod
Aug 27 01:37:33.919: INFO: Waiting for pod pod-2d057ff3-6be2-45cc-90cd-eabc81a5051e to disappear
Aug 27 01:37:33.984: INFO: Pod pod-2d057ff3-6be2-45cc-90cd-eabc81a5051e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:37:33.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7886" for this suite.

• [SLOW TEST:7.103 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1718,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
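The (non-root,0666,tmpfs) triple in the spec name maps directly onto pod fields: a non-root securityContext, a file created with mode 0666, and an emptyDir with medium: Memory. An approximate stand-in (image and commands are assumptions; the real test image also verifies the mount is tmpfs):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                      # non-root; emptyDir defaults to 0777, so writes still succeed
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                     # tmpfs-backed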
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:37:33.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 27 01:37:34.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4893'
Aug 27 01:37:34.367: INFO: stderr: ""
Aug 27 01:37:34.367: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 01:37:34.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4893'
Aug 27 01:37:34.491: INFO: stderr: ""
Aug 27 01:37:34.491: INFO: stdout: "update-demo-nautilus-gfvqh update-demo-nautilus-t5dv7 "
Aug 27 01:37:34.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gfvqh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4893'
Aug 27 01:37:34.596: INFO: stderr: ""
Aug 27 01:37:34.596: INFO: stdout: ""
Aug 27 01:37:34.596: INFO: update-demo-nautilus-gfvqh is created but not running
Aug 27 01:37:39.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4893'
Aug 27 01:37:39.726: INFO: stderr: ""
Aug 27 01:37:39.726: INFO: stdout: "update-demo-nautilus-gfvqh update-demo-nautilus-t5dv7 "
Aug 27 01:37:39.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gfvqh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4893'
Aug 27 01:37:39.816: INFO: stderr: ""
Aug 27 01:37:39.816: INFO: stdout: ""
Aug 27 01:37:39.816: INFO: update-demo-nautilus-gfvqh is created but not running
Aug 27 01:37:44.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4893'
Aug 27 01:37:44.928: INFO: stderr: ""
Aug 27 01:37:44.928: INFO: stdout: "update-demo-nautilus-gfvqh update-demo-nautilus-t5dv7 "
Aug 27 01:37:44.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gfvqh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4893'
Aug 27 01:37:45.021: INFO: stderr: ""
Aug 27 01:37:45.021: INFO: stdout: "true"
Aug 27 01:37:45.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gfvqh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4893'
Aug 27 01:37:45.117: INFO: stderr: ""
Aug 27 01:37:45.117: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 01:37:45.117: INFO: validating pod update-demo-nautilus-gfvqh
Aug 27 01:37:45.121: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 01:37:45.121: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 01:37:45.121: INFO: update-demo-nautilus-gfvqh is verified up and running
Aug 27 01:37:45.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t5dv7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4893'
Aug 27 01:37:45.207: INFO: stderr: ""
Aug 27 01:37:45.207: INFO: stdout: "true"
Aug 27 01:37:45.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t5dv7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4893'
Aug 27 01:37:45.318: INFO: stderr: ""
Aug 27 01:37:45.318: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 01:37:45.318: INFO: validating pod update-demo-nautilus-t5dv7
Aug 27 01:37:45.322: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 01:37:45.322: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 01:37:45.322: INFO: update-demo-nautilus-t5dv7 is verified up and running
STEP: using delete to clean up resources
Aug 27 01:37:45.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4893'
Aug 27 01:37:45.417: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 01:37:45.417: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 27 01:37:45.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4893'
Aug 27 01:37:45.538: INFO: stderr: "No resources found in kubectl-4893 namespace.\n"
Aug 27 01:37:45.538: INFO: stdout: ""
Aug 27 01:37:45.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4893 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 01:37:45.636: INFO: stderr: ""
Aug 27 01:37:45.636: INFO: stdout: "update-demo-nautilus-gfvqh\nupdate-demo-nautilus-t5dv7\n"
Aug 27 01:37:46.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4893'
Aug 27 01:37:46.240: INFO: stderr: "No resources found in kubectl-4893 namespace.\n"
Aug 27 01:37:46.240: INFO: stdout: ""
Aug 27 01:37:46.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4893 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 01:37:46.465: INFO: stderr: ""
Aug 27 01:37:46.465: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:37:46.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4893" for this suite.

• [SLOW TEST:12.553 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":112,"skipped":1815,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:37:46.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:38:39.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9475" for this suite.

• [SLOW TEST:53.071 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1825,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:38:39.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:38:41.887: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 01:38:43.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089121, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:38:46.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089121, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:38:48.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089121, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:38:49.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089122, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089121, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:38:52.979: INFO: Waiting for the number of endpoints for service e2e-test-webhook to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 27 01:38:52.994: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:38:53.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3249" for this suite.
STEP: Destroying namespace "webhook-3249-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.296 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":114,"skipped":1834,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:38:56.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Aug 27 01:38:58.426: INFO: Waiting up to 5m0s for pod "client-containers-cd0c9266-f5eb-4f2b-8449-6cddc9b51e06" in namespace "containers-7247" to be "success or failure"
Aug 27 01:38:58.437: INFO: Pod "client-containers-cd0c9266-f5eb-4f2b-8449-6cddc9b51e06": Phase="Pending", Reason="", readiness=false. Elapsed: 10.446043ms
Aug 27 01:39:00.639: INFO: Pod "client-containers-cd0c9266-f5eb-4f2b-8449-6cddc9b51e06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212661906s
Aug 27 01:39:03.264: INFO: Pod "client-containers-cd0c9266-f5eb-4f2b-8449-6cddc9b51e06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.837355156s
Aug 27 01:39:05.835: INFO: Pod "client-containers-cd0c9266-f5eb-4f2b-8449-6cddc9b51e06": Phase="Running", Reason="", readiness=true. Elapsed: 7.40932391s
Aug 27 01:39:08.315: INFO: Pod "client-containers-cd0c9266-f5eb-4f2b-8449-6cddc9b51e06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.888683849s
STEP: Saw pod success
Aug 27 01:39:08.315: INFO: Pod "client-containers-cd0c9266-f5eb-4f2b-8449-6cddc9b51e06" satisfied condition "success or failure"
Aug 27 01:39:08.318: INFO: Trying to get logs from node jerma-worker pod client-containers-cd0c9266-f5eb-4f2b-8449-6cddc9b51e06 container test-container: 
STEP: delete the pod
Aug 27 01:39:09.322: INFO: Waiting for pod client-containers-cd0c9266-f5eb-4f2b-8449-6cddc9b51e06 to disappear
Aug 27 01:39:09.386: INFO: Pod client-containers-cd0c9266-f5eb-4f2b-8449-6cddc9b51e06 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:39:09.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7247" for this suite.

• [SLOW TEST:12.479 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1838,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:39:09.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 27 01:39:18.523: INFO: Successfully updated pod "annotationupdatec2038bac-b414-4976-aa34-6291c91965d1"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:39:19.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1206" for this suite.

• [SLOW TEST:10.944 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1849,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:39:20.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Aug 27 01:39:22.269: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 27 01:39:22.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9484'
Aug 27 01:39:24.111: INFO: stderr: ""
Aug 27 01:39:24.111: INFO: stdout: "service/agnhost-slave created\n"
Aug 27 01:39:24.111: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 27 01:39:24.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9484'
Aug 27 01:39:25.451: INFO: stderr: ""
Aug 27 01:39:25.451: INFO: stdout: "service/agnhost-master created\n"
Aug 27 01:39:25.451: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 27 01:39:25.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9484'
Aug 27 01:39:27.293: INFO: stderr: ""
Aug 27 01:39:27.293: INFO: stdout: "service/frontend created\n"
Aug 27 01:39:27.294: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 27 01:39:27.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9484'
Aug 27 01:39:28.125: INFO: stderr: ""
Aug 27 01:39:28.125: INFO: stdout: "deployment.apps/frontend created\n"
Aug 27 01:39:28.125: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 27 01:39:28.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9484'
Aug 27 01:39:28.767: INFO: stderr: ""
Aug 27 01:39:28.768: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 27 01:39:28.768: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 27 01:39:28.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9484'
Aug 27 01:39:29.853: INFO: stderr: ""
Aug 27 01:39:29.853: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 27 01:39:29.853: INFO: Waiting for all frontend pods to be Running.
Aug 27 01:39:44.904: INFO: Waiting for frontend to serve content.
Aug 27 01:39:44.915: INFO: Trying to add a new entry to the guestbook.
Aug 27 01:39:44.926: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 27 01:39:44.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9484'
Aug 27 01:39:46.019: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 01:39:46.019: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 01:39:46.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9484'
Aug 27 01:39:47.479: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 01:39:47.479: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 01:39:47.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9484'
Aug 27 01:39:48.134: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 01:39:48.134: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 01:39:48.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9484'
Aug 27 01:39:49.028: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 01:39:49.028: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 01:39:49.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9484'
Aug 27 01:39:49.326: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 01:39:49.326: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 27 01:39:49.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9484'
Aug 27 01:39:50.019: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 01:39:50.019: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:39:50.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9484" for this suite.

• [SLOW TEST:29.689 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":117,"skipped":1880,"failed":0}
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:39:50.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-efbf6548-89c3-4b70-9eff-868dc8a7d5aa
STEP: Creating a pod to test consume secrets
Aug 27 01:39:57.569: INFO: Waiting up to 5m0s for pod "pod-secrets-9a46e489-e21d-432d-80b3-51c5a5eb4777" in namespace "secrets-5822" to be "success or failure"
Aug 27 01:39:58.650: INFO: Pod "pod-secrets-9a46e489-e21d-432d-80b3-51c5a5eb4777": Phase="Pending", Reason="", readiness=false. Elapsed: 1.081480925s
Aug 27 01:40:00.849: INFO: Pod "pod-secrets-9a46e489-e21d-432d-80b3-51c5a5eb4777": Phase="Pending", Reason="", readiness=false. Elapsed: 3.279883503s
Aug 27 01:40:03.137: INFO: Pod "pod-secrets-9a46e489-e21d-432d-80b3-51c5a5eb4777": Phase="Pending", Reason="", readiness=false. Elapsed: 5.568282801s
Aug 27 01:40:05.519: INFO: Pod "pod-secrets-9a46e489-e21d-432d-80b3-51c5a5eb4777": Phase="Pending", Reason="", readiness=false. Elapsed: 7.950049223s
Aug 27 01:40:07.567: INFO: Pod "pod-secrets-9a46e489-e21d-432d-80b3-51c5a5eb4777": Phase="Running", Reason="", readiness=true. Elapsed: 9.998182515s
Aug 27 01:40:09.570: INFO: Pod "pod-secrets-9a46e489-e21d-432d-80b3-51c5a5eb4777": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.00101841s
STEP: Saw pod success
Aug 27 01:40:09.570: INFO: Pod "pod-secrets-9a46e489-e21d-432d-80b3-51c5a5eb4777" satisfied condition "success or failure"
Aug 27 01:40:09.572: INFO: Trying to get logs from node jerma-worker pod pod-secrets-9a46e489-e21d-432d-80b3-51c5a5eb4777 container secret-volume-test: 
STEP: delete the pod
Aug 27 01:40:09.748: INFO: Waiting for pod pod-secrets-9a46e489-e21d-432d-80b3-51c5a5eb4777 to disappear
Aug 27 01:40:09.798: INFO: Pod pod-secrets-9a46e489-e21d-432d-80b3-51c5a5eb4777 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:40:09.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5822" for this suite.
STEP: Destroying namespace "secret-namespace-5389" for this suite.

• [SLOW TEST:19.898 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1880,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:40:09.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 27 01:40:10.090: INFO: Waiting up to 5m0s for pod "downward-api-35ed97e0-6ea2-4806-8144-0be37140b66f" in namespace "downward-api-134" to be "success or failure"
Aug 27 01:40:10.116: INFO: Pod "downward-api-35ed97e0-6ea2-4806-8144-0be37140b66f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.017046ms
Aug 27 01:40:12.341: INFO: Pod "downward-api-35ed97e0-6ea2-4806-8144-0be37140b66f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250126479s
Aug 27 01:40:14.345: INFO: Pod "downward-api-35ed97e0-6ea2-4806-8144-0be37140b66f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254535496s
Aug 27 01:40:16.666: INFO: Pod "downward-api-35ed97e0-6ea2-4806-8144-0be37140b66f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.57514884s
STEP: Saw pod success
Aug 27 01:40:16.666: INFO: Pod "downward-api-35ed97e0-6ea2-4806-8144-0be37140b66f" satisfied condition "success or failure"
Aug 27 01:40:16.700: INFO: Trying to get logs from node jerma-worker2 pod downward-api-35ed97e0-6ea2-4806-8144-0be37140b66f container dapi-container: 
STEP: delete the pod
Aug 27 01:40:16.918: INFO: Waiting for pod downward-api-35ed97e0-6ea2-4806-8144-0be37140b66f to disappear
Aug 27 01:40:17.004: INFO: Pod downward-api-35ed97e0-6ea2-4806-8144-0be37140b66f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:40:17.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-134" for this suite.

• [SLOW TEST:7.102 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1884,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:40:17.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 27 01:40:17.588: INFO: Waiting up to 5m0s for pod "pod-66d4c8b7-4f58-4d04-ba20-ae2b2c7b2cdd" in namespace "emptydir-4135" to be "success or failure"
Aug 27 01:40:17.729: INFO: Pod "pod-66d4c8b7-4f58-4d04-ba20-ae2b2c7b2cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 140.444505ms
Aug 27 01:40:19.733: INFO: Pod "pod-66d4c8b7-4f58-4d04-ba20-ae2b2c7b2cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144833051s
Aug 27 01:40:21.736: INFO: Pod "pod-66d4c8b7-4f58-4d04-ba20-ae2b2c7b2cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14825086s
Aug 27 01:40:23.739: INFO: Pod "pod-66d4c8b7-4f58-4d04-ba20-ae2b2c7b2cdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.151376075s
STEP: Saw pod success
Aug 27 01:40:23.740: INFO: Pod "pod-66d4c8b7-4f58-4d04-ba20-ae2b2c7b2cdd" satisfied condition "success or failure"
Aug 27 01:40:23.742: INFO: Trying to get logs from node jerma-worker pod pod-66d4c8b7-4f58-4d04-ba20-ae2b2c7b2cdd container test-container: 
STEP: delete the pod
Aug 27 01:40:23.798: INFO: Waiting for pod pod-66d4c8b7-4f58-4d04-ba20-ae2b2c7b2cdd to disappear
Aug 27 01:40:23.986: INFO: Pod pod-66d4c8b7-4f58-4d04-ba20-ae2b2c7b2cdd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:40:23.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4135" for this suite.

• [SLOW TEST:6.965 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1901,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:40:23.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:40:25.778: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 01:40:27.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:40:29.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:40:31.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089225, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:40:35.369: INFO: Waiting for the number of endpoints for service e2e-test-webhook to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:40:36.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-955" for this suite.
STEP: Destroying namespace "webhook-955-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.416 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":121,"skipped":1934,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:40:36.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4105.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4105.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4105.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4105.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 27 01:40:59.601: INFO: DNS probes using dns-test-1f9deba5-356c-4611-aeb2-327233a7518b succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4105.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4105.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4105.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4105.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 27 01:41:10.973: INFO: File wheezy_udp@dns-test-service-3.dns-4105.svc.cluster.local from pod dns-4105/dns-test-02de4e76-1863-40fc-89e9-046788b4df66 contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 27 01:41:10.976: INFO: File jessie_udp@dns-test-service-3.dns-4105.svc.cluster.local from pod dns-4105/dns-test-02de4e76-1863-40fc-89e9-046788b4df66 contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 27 01:41:10.976: INFO: Lookups using dns-4105/dns-test-02de4e76-1863-40fc-89e9-046788b4df66 failed for: [wheezy_udp@dns-test-service-3.dns-4105.svc.cluster.local jessie_udp@dns-test-service-3.dns-4105.svc.cluster.local]

Aug 27 01:41:15.981: INFO: File wheezy_udp@dns-test-service-3.dns-4105.svc.cluster.local from pod dns-4105/dns-test-02de4e76-1863-40fc-89e9-046788b4df66 contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 27 01:41:15.985: INFO: File jessie_udp@dns-test-service-3.dns-4105.svc.cluster.local from pod dns-4105/dns-test-02de4e76-1863-40fc-89e9-046788b4df66 contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 27 01:41:15.986: INFO: Lookups using dns-4105/dns-test-02de4e76-1863-40fc-89e9-046788b4df66 failed for: [wheezy_udp@dns-test-service-3.dns-4105.svc.cluster.local jessie_udp@dns-test-service-3.dns-4105.svc.cluster.local]

Aug 27 01:41:20.981: INFO: File wheezy_udp@dns-test-service-3.dns-4105.svc.cluster.local from pod dns-4105/dns-test-02de4e76-1863-40fc-89e9-046788b4df66 contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 27 01:41:20.985: INFO: File jessie_udp@dns-test-service-3.dns-4105.svc.cluster.local from pod dns-4105/dns-test-02de4e76-1863-40fc-89e9-046788b4df66 contains 'foo.example.com.' instead of 'bar.example.com.'
Aug 27 01:41:20.985: INFO: Lookups using dns-4105/dns-test-02de4e76-1863-40fc-89e9-046788b4df66 failed for: [wheezy_udp@dns-test-service-3.dns-4105.svc.cluster.local jessie_udp@dns-test-service-3.dns-4105.svc.cluster.local]

Aug 27 01:41:25.985: INFO: DNS probes using dns-test-02de4e76-1863-40fc-89e9-046788b4df66 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4105.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4105.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4105.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4105.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 27 01:41:45.575: INFO: DNS probes using dns-test-6603eeb7-4d6d-44c9-9eab-e30bd56df92c succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:41:46.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4105" for this suite.

• [SLOW TEST:70.199 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":122,"skipped":1947,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:41:46.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:41:55.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3040" for this suite.

• [SLOW TEST:8.484 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":123,"skipped":1953,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:41:55.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 27 01:41:55.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8910'
Aug 27 01:41:55.510: INFO: stderr: ""
Aug 27 01:41:55.510: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765
Aug 27 01:41:55.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8910'
Aug 27 01:42:02.019: INFO: stderr: ""
Aug 27 01:42:02.019: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:42:02.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8910" for this suite.

• [SLOW TEST:6.988 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":124,"skipped":1990,"failed":0}
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:42:02.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Aug 27 01:42:02.211: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 27 01:42:07.304: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:42:08.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7127" for this suite.

• [SLOW TEST:6.797 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":125,"skipped":1990,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:42:08.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:42:20.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4583" for this suite.

• [SLOW TEST:11.717 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":126,"skipped":2018,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:42:20.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:42:20.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2393" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2021,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:42:21.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-d061378c-bd44-44b5-993a-1d342fffdb67 in namespace container-probe-5137
Aug 27 01:42:27.367: INFO: Started pod liveness-d061378c-bd44-44b5-993a-1d342fffdb67 in namespace container-probe-5137
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 01:42:27.370: INFO: Initial restart count of pod liveness-d061378c-bd44-44b5-993a-1d342fffdb67 is 0
Aug 27 01:42:43.777: INFO: Restart count of pod container-probe-5137/liveness-d061378c-bd44-44b5-993a-1d342fffdb67 is now 1 (16.406602176s elapsed)
Aug 27 01:42:59.962: INFO: Restart count of pod container-probe-5137/liveness-d061378c-bd44-44b5-993a-1d342fffdb67 is now 2 (32.591451372s elapsed)
Aug 27 01:43:20.286: INFO: Restart count of pod container-probe-5137/liveness-d061378c-bd44-44b5-993a-1d342fffdb67 is now 3 (52.916033246s elapsed)
Aug 27 01:43:40.327: INFO: Restart count of pod container-probe-5137/liveness-d061378c-bd44-44b5-993a-1d342fffdb67 is now 4 (1m12.957130903s elapsed)
Aug 27 01:44:50.916: INFO: Restart count of pod container-probe-5137/liveness-d061378c-bd44-44b5-993a-1d342fffdb67 is now 5 (2m23.545597692s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:44:50.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5137" for this suite.

• [SLOW TEST:149.978 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2043,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:44:50.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-564c9afd-172e-4927-ac29-c23c965571af
STEP: Creating configMap with name cm-test-opt-upd-c0f956e9-a94c-49b9-bf2f-b7baa10ff61d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-564c9afd-172e-4927-ac29-c23c965571af
STEP: Updating configmap cm-test-opt-upd-c0f956e9-a94c-49b9-bf2f-b7baa10ff61d
STEP: Creating configMap with name cm-test-opt-create-a67fe7d2-d641-4260-a715-3e62e09cb939
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:46:19.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4028" for this suite.

• [SLOW TEST:88.797 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2044,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:46:19.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 27 01:46:19.856: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Aug 27 01:46:20.461: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 27 01:46:23.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089580, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089580, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089580, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089580, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:46:25.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089580, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089580, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089580, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089580, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:46:28.031: INFO: Waited 606.534156ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:46:33.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1621" for this suite.

• [SLOW TEST:13.913 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":130,"skipped":2052,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:46:33.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 27 01:46:34.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:46:34.742: INFO: Number of nodes with available pods: 0
Aug 27 01:46:34.742: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:46:35.776: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:46:35.779: INFO: Number of nodes with available pods: 0
Aug 27 01:46:35.779: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:46:36.769: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:46:36.777: INFO: Number of nodes with available pods: 0
Aug 27 01:46:36.777: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:46:37.766: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:46:37.798: INFO: Number of nodes with available pods: 0
Aug 27 01:46:37.798: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:46:38.782: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:46:38.786: INFO: Number of nodes with available pods: 0
Aug 27 01:46:38.786: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:46:39.763: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:46:39.768: INFO: Number of nodes with available pods: 1
Aug 27 01:46:39.768: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 01:46:40.747: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:46:40.751: INFO: Number of nodes with available pods: 2
Aug 27 01:46:40.751: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 27 01:46:40.795: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:46:40.810: INFO: Number of nodes with available pods: 2
Aug 27 01:46:40.810: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6840, will wait for the garbage collector to delete the pods
Aug 27 01:46:42.464: INFO: Deleting DaemonSet.extensions daemon-set took: 169.433205ms
Aug 27 01:46:42.764: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.495622ms
Aug 27 01:46:51.668: INFO: Number of nodes with available pods: 0
Aug 27 01:46:51.668: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 01:46:51.675: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6840/daemonsets","resourceVersion":"4087835"},"items":null}

Aug 27 01:46:51.678: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6840/pods","resourceVersion":"4087835"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:46:51.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6840" for this suite.

• [SLOW TEST:18.000 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":131,"skipped":2055,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:46:51.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 27 01:46:56.349: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9161 pod-service-account-160f6982-f6f1-4482-a8c5-193b8b34af15 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 27 01:46:56.756: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9161 pod-service-account-160f6982-f6f1-4482-a8c5-193b8b34af15 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 27 01:46:59.080: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9161 pod-service-account-160f6982-f6f1-4482-a8c5-193b8b34af15 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:47:00.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9161" for this suite.

• [SLOW TEST:8.859 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":132,"skipped":2086,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:47:00.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 27 01:47:00.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 27 01:47:11.079: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 01:47:13.987: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:47:23.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6655" for this suite.

• [SLOW TEST:22.930 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":133,"skipped":2135,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:47:23.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 27 01:47:23.743: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 01:47:23.843: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 01:47:23.845: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 27 01:47:23.851: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 27 01:47:23.851: INFO: 	Container app ready: true, restart count 0
Aug 27 01:47:23.851: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 01:47:23.851: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 01:47:23.851: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 01:47:23.851: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 01:47:23.851: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 27 01:47:23.876: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 01:47:23.876: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 01:47:23.876: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 27 01:47:23.876: INFO: 	Container app ready: true, restart count 0
Aug 27 01:47:23.876: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 01:47:23.876: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 01:47:23.876: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container statuses recorded)
Aug 27 01:47:23.876: INFO: 	Container httpd ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162efcc5b818c020], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:47:24.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5264" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":134,"skipped":2208,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:47:24.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 27 01:47:24.961: INFO: namespace kubectl-5850
Aug 27 01:47:24.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5850'
Aug 27 01:47:25.325: INFO: stderr: ""
Aug 27 01:47:25.325: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 27 01:47:26.329: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:47:26.329: INFO: Found 0 / 1
Aug 27 01:47:27.329: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:47:27.329: INFO: Found 0 / 1
Aug 27 01:47:28.393: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:47:28.393: INFO: Found 0 / 1
Aug 27 01:47:29.452: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:47:29.452: INFO: Found 0 / 1
Aug 27 01:47:30.330: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:47:30.330: INFO: Found 0 / 1
Aug 27 01:47:31.329: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:47:31.329: INFO: Found 1 / 1
Aug 27 01:47:31.329: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 27 01:47:31.332: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:47:31.332: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 27 01:47:31.332: INFO: wait on agnhost-master startup in kubectl-5850 
Aug 27 01:47:31.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-8rgz4 agnhost-master --namespace=kubectl-5850'
Aug 27 01:47:31.450: INFO: stderr: ""
Aug 27 01:47:31.450: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 27 01:47:31.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5850'
Aug 27 01:47:31.811: INFO: stderr: ""
Aug 27 01:47:31.811: INFO: stdout: "service/rm2 exposed\n"
Aug 27 01:47:32.004: INFO: Service rm2 in namespace kubectl-5850 found.
STEP: exposing service
Aug 27 01:47:34.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5850'
Aug 27 01:47:34.165: INFO: stderr: ""
Aug 27 01:47:34.165: INFO: stdout: "service/rm3 exposed\n"
Aug 27 01:47:34.201: INFO: Service rm3 in namespace kubectl-5850 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:47:36.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5850" for this suite.

• [SLOW TEST:11.312 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":135,"skipped":2211,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:47:36.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Aug 27 01:47:36.412: INFO: Waiting up to 5m0s for pod "var-expansion-33784f0f-b14f-466d-baec-9ba10214b71a" in namespace "var-expansion-959" to be "success or failure"
Aug 27 01:47:36.482: INFO: Pod "var-expansion-33784f0f-b14f-466d-baec-9ba10214b71a": Phase="Pending", Reason="", readiness=false. Elapsed: 70.693792ms
Aug 27 01:47:38.486: INFO: Pod "var-expansion-33784f0f-b14f-466d-baec-9ba10214b71a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074521718s
Aug 27 01:47:40.490: INFO: Pod "var-expansion-33784f0f-b14f-466d-baec-9ba10214b71a": Phase="Running", Reason="", readiness=true. Elapsed: 4.078247653s
Aug 27 01:47:42.495: INFO: Pod "var-expansion-33784f0f-b14f-466d-baec-9ba10214b71a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082880307s
STEP: Saw pod success
Aug 27 01:47:42.495: INFO: Pod "var-expansion-33784f0f-b14f-466d-baec-9ba10214b71a" satisfied condition "success or failure"
Aug 27 01:47:42.498: INFO: Trying to get logs from node jerma-worker pod var-expansion-33784f0f-b14f-466d-baec-9ba10214b71a container dapi-container: 
STEP: delete the pod
Aug 27 01:47:42.519: INFO: Waiting for pod var-expansion-33784f0f-b14f-466d-baec-9ba10214b71a to disappear
Aug 27 01:47:42.542: INFO: Pod var-expansion-33784f0f-b14f-466d-baec-9ba10214b71a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:47:42.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-959" for this suite.

• [SLOW TEST:6.335 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2227,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:47:42.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 27 01:47:42.737: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 01:47:44.732: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:47:54.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3155" for this suite.

• [SLOW TEST:12.215 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":137,"skipped":2237,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:47:54.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:48:00.923: INFO: Waiting up to 5m0s for pod "client-envvars-23479b4c-6618-4ce5-9f28-997a64524a46" in namespace "pods-9529" to be "success or failure"
Aug 27 01:48:00.944: INFO: Pod "client-envvars-23479b4c-6618-4ce5-9f28-997a64524a46": Phase="Pending", Reason="", readiness=false. Elapsed: 20.194421ms
Aug 27 01:48:03.014: INFO: Pod "client-envvars-23479b4c-6618-4ce5-9f28-997a64524a46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090286178s
Aug 27 01:48:05.018: INFO: Pod "client-envvars-23479b4c-6618-4ce5-9f28-997a64524a46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094250906s
Aug 27 01:48:07.022: INFO: Pod "client-envvars-23479b4c-6618-4ce5-9f28-997a64524a46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098274868s
STEP: Saw pod success
Aug 27 01:48:07.022: INFO: Pod "client-envvars-23479b4c-6618-4ce5-9f28-997a64524a46" satisfied condition "success or failure"
Aug 27 01:48:07.025: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-23479b4c-6618-4ce5-9f28-997a64524a46 container env3cont: 
STEP: delete the pod
Aug 27 01:48:07.071: INFO: Waiting for pod client-envvars-23479b4c-6618-4ce5-9f28-997a64524a46 to disappear
Aug 27 01:48:07.076: INFO: Pod client-envvars-23479b4c-6618-4ce5-9f28-997a64524a46 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:48:07.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9529" for this suite.

• [SLOW TEST:12.317 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2263,"failed":0}
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:48:07.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Aug 27 01:48:07.659: INFO: created pod pod-service-account-defaultsa
Aug 27 01:48:07.659: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 27 01:48:07.663: INFO: created pod pod-service-account-mountsa
Aug 27 01:48:07.663: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 27 01:48:07.669: INFO: created pod pod-service-account-nomountsa
Aug 27 01:48:07.669: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 27 01:48:07.699: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 27 01:48:07.699: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 27 01:48:07.720: INFO: created pod pod-service-account-mountsa-mountspec
Aug 27 01:48:07.720: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 27 01:48:07.813: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 27 01:48:07.813: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 27 01:48:07.932: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 27 01:48:07.932: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 27 01:48:07.944: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 27 01:48:07.944: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 27 01:48:08.056: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 27 01:48:08.056: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:48:08.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7452" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":139,"skipped":2271,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:48:08.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-da749803-ce6b-428c-945e-342008350fcc
STEP: Creating a pod to test consume configMaps
Aug 27 01:48:08.785: INFO: Waiting up to 5m0s for pod "pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9" in namespace "configmap-9844" to be "success or failure"
Aug 27 01:48:08.813: INFO: Pod "pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.733926ms
Aug 27 01:48:11.033: INFO: Pod "pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248439925s
Aug 27 01:48:13.615: INFO: Pod "pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.830206337s
Aug 27 01:48:15.807: INFO: Pod "pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.022699787s
Aug 27 01:48:17.886: INFO: Pod "pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.101605937s
Aug 27 01:48:20.395: INFO: Pod "pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.610346488s
Aug 27 01:48:22.453: INFO: Pod "pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.668474795s
Aug 27 01:48:24.461: INFO: Pod "pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.676075404s
STEP: Saw pod success
Aug 27 01:48:24.461: INFO: Pod "pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9" satisfied condition "success or failure"
Aug 27 01:48:24.463: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9 container configmap-volume-test: 
STEP: delete the pod
Aug 27 01:48:24.551: INFO: Waiting for pod pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9 to disappear
Aug 27 01:48:24.568: INFO: Pod pod-configmaps-32e230ac-17fd-40b1-b4e7-b62e6d176bb9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:48:24.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9844" for this suite.

• [SLOW TEST:16.377 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2280,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:48:24.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:48:24.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 27 01:48:27.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1647 create -f -'
Aug 27 01:48:31.315: INFO: stderr: ""
Aug 27 01:48:31.315: INFO: stdout: "e2e-test-crd-publish-openapi-6752-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 27 01:48:31.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1647 delete e2e-test-crd-publish-openapi-6752-crds test-cr'
Aug 27 01:48:31.436: INFO: stderr: ""
Aug 27 01:48:31.436: INFO: stdout: "e2e-test-crd-publish-openapi-6752-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Aug 27 01:48:31.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1647 apply -f -'
Aug 27 01:48:31.688: INFO: stderr: ""
Aug 27 01:48:31.688: INFO: stdout: "e2e-test-crd-publish-openapi-6752-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 27 01:48:31.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1647 delete e2e-test-crd-publish-openapi-6752-crds test-cr'
Aug 27 01:48:31.798: INFO: stderr: ""
Aug 27 01:48:31.798: INFO: stdout: "e2e-test-crd-publish-openapi-6752-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Aug 27 01:48:31.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6752-crds'
Aug 27 01:48:32.079: INFO: stderr: ""
Aug 27 01:48:32.079: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6752-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:48:34.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1647" for this suite.

• [SLOW TEST:10.397 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":141,"skipped":2315,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:48:34.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-60b27d4e-dc02-4bbd-9646-3f9606691666
STEP: Creating a pod to test consume configMaps
Aug 27 01:48:35.170: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8c9cdb1d-b20d-4969-a940-21e80da93c48" in namespace "projected-4130" to be "success or failure"
Aug 27 01:48:35.179: INFO: Pod "pod-projected-configmaps-8c9cdb1d-b20d-4969-a940-21e80da93c48": Phase="Pending", Reason="", readiness=false. Elapsed: 9.384625ms
Aug 27 01:48:37.184: INFO: Pod "pod-projected-configmaps-8c9cdb1d-b20d-4969-a940-21e80da93c48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013553296s
Aug 27 01:48:39.187: INFO: Pod "pod-projected-configmaps-8c9cdb1d-b20d-4969-a940-21e80da93c48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017433231s
STEP: Saw pod success
Aug 27 01:48:39.188: INFO: Pod "pod-projected-configmaps-8c9cdb1d-b20d-4969-a940-21e80da93c48" satisfied condition "success or failure"
Aug 27 01:48:39.191: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-8c9cdb1d-b20d-4969-a940-21e80da93c48 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 01:48:39.237: INFO: Waiting for pod pod-projected-configmaps-8c9cdb1d-b20d-4969-a940-21e80da93c48 to disappear
Aug 27 01:48:39.273: INFO: Pod pod-projected-configmaps-8c9cdb1d-b20d-4969-a940-21e80da93c48 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:48:39.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4130" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2328,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:48:39.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:48:39.729: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 27 01:48:41.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089719, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089719, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089719, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089719, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:48:43.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089719, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089719, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089719, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089719, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:48:47.907: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:48:47.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:48:51.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2892" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:12.276 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":143,"skipped":2329,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:48:51.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:48:52.546: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 01:48:54.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089732, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089732, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089733, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089732, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:48:56.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089732, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089732, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089733, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089732, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:48:59.645: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:48:59.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3843" for this suite.
STEP: Destroying namespace "webhook-3843-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.340 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":144,"skipped":2333,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:48:59.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 27 01:49:05.085: INFO: Successfully updated pod "labelsupdatef28a9d45-93ed-4c8f-98f7-449ef21d0922"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:49:09.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-779" for this suite.

• [SLOW TEST:9.225 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2336,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:49:09.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 27 01:49:09.195: INFO: Waiting up to 5m0s for pod "pod-a934e05b-2bd8-4beb-b0f1-af0fe5b69baa" in namespace "emptydir-6709" to be "success or failure"
Aug 27 01:49:09.198: INFO: Pod "pod-a934e05b-2bd8-4beb-b0f1-af0fe5b69baa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.426248ms
Aug 27 01:49:11.229: INFO: Pod "pod-a934e05b-2bd8-4beb-b0f1-af0fe5b69baa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03398073s
Aug 27 01:49:13.249: INFO: Pod "pod-a934e05b-2bd8-4beb-b0f1-af0fe5b69baa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054149777s
STEP: Saw pod success
Aug 27 01:49:13.249: INFO: Pod "pod-a934e05b-2bd8-4beb-b0f1-af0fe5b69baa" satisfied condition "success or failure"
Aug 27 01:49:13.252: INFO: Trying to get logs from node jerma-worker pod pod-a934e05b-2bd8-4beb-b0f1-af0fe5b69baa container test-container: 
STEP: delete the pod
Aug 27 01:49:13.385: INFO: Waiting for pod pod-a934e05b-2bd8-4beb-b0f1-af0fe5b69baa to disappear
Aug 27 01:49:13.390: INFO: Pod pod-a934e05b-2bd8-4beb-b0f1-af0fe5b69baa no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:49:13.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6709" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2357,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:49:13.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:49:13.455: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 27 01:49:18.458: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 27 01:49:18.458: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 27 01:49:18.553: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-7369 /apis/apps/v1/namespaces/deployment-7369/deployments/test-cleanup-deployment 5890109f-387b-4094-ad38-bd7fce4b86b3 4088851 1 2020-08-27 01:49:18 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0066a3a28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Aug 27 01:49:18.582: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-7369 /apis/apps/v1/namespaces/deployment-7369/replicasets/test-cleanup-deployment-55ffc6b7b6 49c0b988-7521-4f83-a019-acf3152374c9 4088854 1 2020-08-27 01:49:18 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 5890109f-387b-4094-ad38-bd7fce4b86b3 0xc004700f07 0xc004700f08}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004700f78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 27 01:49:18.582: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Aug 27 01:49:18.582: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-7369 /apis/apps/v1/namespaces/deployment-7369/replicasets/test-cleanup-controller eca698df-1261-468c-a4ae-520b7d307ca4 4088853 1 2020-08-27 01:49:13 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 5890109f-387b-4094-ad38-bd7fce4b86b3 0xc004700e27 0xc004700e28}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004700e88  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 27 01:49:18.636: INFO: Pod "test-cleanup-controller-5cfjq" is available:
&Pod{ObjectMeta:{test-cleanup-controller-5cfjq test-cleanup-controller- deployment-7369 /api/v1/namespaces/deployment-7369/pods/test-cleanup-controller-5cfjq ebd8d1ed-8f05-42fe-b304-9ffadc4199f4 4088842 0 2020-08-27 01:49:13 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller eca698df-1261-468c-a4ae-520b7d307ca4 0xc0047013e7 0xc0047013e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4486s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4486s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4486s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:49:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:49:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:49:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:49:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.177,StartTime:2020-08-27 01:49:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-27 01:49:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c4b603a8bd2279aaa3110fe4127314d6e344ed14e3c9ae9757666af49a624b9c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 27 01:49:18.637: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-f5l7z" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-f5l7z test-cleanup-deployment-55ffc6b7b6- deployment-7369 /api/v1/namespaces/deployment-7369/pods/test-cleanup-deployment-55ffc6b7b6-f5l7z bfd8e5a6-6de0-4411-8144-e567a3b6827c 4088860 0 2020-08-27 01:49:18 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 49c0b988-7521-4f83-a019-acf3152374c9 0xc004701577 0xc004701578}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4486s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4486s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4486s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 01:49:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:49:18.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7369" for this suite.

• [SLOW TEST:5.330 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":147,"skipped":2386,"failed":0}
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:49:18.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-kvdp
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 01:49:18.831: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kvdp" in namespace "subpath-2111" to be "success or failure"
Aug 27 01:49:18.863: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Pending", Reason="", readiness=false. Elapsed: 31.705778ms
Aug 27 01:49:20.945: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113457798s
Aug 27 01:49:22.956: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125229872s
Aug 27 01:49:25.186: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Running", Reason="", readiness=true. Elapsed: 6.355132691s
Aug 27 01:49:27.190: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Running", Reason="", readiness=true. Elapsed: 8.358931768s
Aug 27 01:49:29.194: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Running", Reason="", readiness=true. Elapsed: 10.36322096s
Aug 27 01:49:31.198: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Running", Reason="", readiness=true. Elapsed: 12.367045214s
Aug 27 01:49:33.203: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Running", Reason="", readiness=true. Elapsed: 14.371531244s
Aug 27 01:49:35.206: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Running", Reason="", readiness=true. Elapsed: 16.375091169s
Aug 27 01:49:37.210: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Running", Reason="", readiness=true. Elapsed: 18.379330959s
Aug 27 01:49:39.214: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Running", Reason="", readiness=true. Elapsed: 20.38319925s
Aug 27 01:49:41.218: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Running", Reason="", readiness=true. Elapsed: 22.387056754s
Aug 27 01:49:43.222: INFO: Pod "pod-subpath-test-configmap-kvdp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.39064423s
STEP: Saw pod success
Aug 27 01:49:43.222: INFO: Pod "pod-subpath-test-configmap-kvdp" satisfied condition "success or failure"
Aug 27 01:49:43.224: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-kvdp container test-container-subpath-configmap-kvdp: 
STEP: delete the pod
Aug 27 01:49:43.319: INFO: Waiting for pod pod-subpath-test-configmap-kvdp to disappear
Aug 27 01:49:43.391: INFO: Pod pod-subpath-test-configmap-kvdp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-kvdp
Aug 27 01:49:43.391: INFO: Deleting pod "pod-subpath-test-configmap-kvdp" in namespace "subpath-2111"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:49:43.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2111" for this suite.

• [SLOW TEST:24.700 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":148,"skipped":2386,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:49:43.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 27 01:49:48.053: INFO: Successfully updated pod "pod-update-7b29d916-0d36-467a-9223-86f37178a140"
STEP: verifying the updated pod is in kubernetes
Aug 27 01:49:48.077: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:49:48.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9196" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2407,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:49:48.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Aug 27 01:49:48.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1142 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 27 01:49:51.191: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0827 01:49:51.116267    1987 log.go:172] (0xc0009e4420) (0xc0007b0140) Create stream\nI0827 01:49:51.116328    1987 log.go:172] (0xc0009e4420) (0xc0007b0140) Stream added, broadcasting: 1\nI0827 01:49:51.119203    1987 log.go:172] (0xc0009e4420) Reply frame received for 1\nI0827 01:49:51.119251    1987 log.go:172] (0xc0009e4420) (0xc00082c000) Create stream\nI0827 01:49:51.119267    1987 log.go:172] (0xc0009e4420) (0xc00082c000) Stream added, broadcasting: 3\nI0827 01:49:51.120220    1987 log.go:172] (0xc0009e4420) Reply frame received for 3\nI0827 01:49:51.120259    1987 log.go:172] (0xc0009e4420) (0xc00070da40) Create stream\nI0827 01:49:51.120272    1987 log.go:172] (0xc0009e4420) (0xc00070da40) Stream added, broadcasting: 5\nI0827 01:49:51.121238    1987 log.go:172] (0xc0009e4420) Reply frame received for 5\nI0827 01:49:51.121269    1987 log.go:172] (0xc0009e4420) (0xc00070dae0) Create stream\nI0827 01:49:51.121284    1987 log.go:172] (0xc0009e4420) (0xc00070dae0) Stream added, broadcasting: 7\nI0827 01:49:51.122068    1987 log.go:172] (0xc0009e4420) Reply frame received for 7\nI0827 01:49:51.122299    1987 log.go:172] (0xc00082c000) (3) Writing data frame\nI0827 01:49:51.122410    1987 log.go:172] (0xc00082c000) (3) Writing data frame\nI0827 01:49:51.123244    1987 log.go:172] (0xc0009e4420) Data frame received for 5\nI0827 01:49:51.123272    1987 log.go:172] (0xc00070da40) (5) Data frame handling\nI0827 01:49:51.123290    1987 log.go:172] (0xc00070da40) (5) Data frame sent\nI0827 01:49:51.123963    1987 log.go:172] (0xc0009e4420) Data frame received for 5\nI0827 01:49:51.123983    1987 log.go:172] (0xc00070da40) (5) Data frame handling\nI0827 01:49:51.124010    1987 log.go:172] (0xc00070da40) (5) Data frame sent\nI0827 01:49:51.160178    1987 log.go:172] (0xc0009e4420) Data frame received for 7\nI0827 01:49:51.160215    1987 log.go:172] (0xc00070dae0) (7) Data frame handling\nI0827 01:49:51.160241    1987 log.go:172] (0xc0009e4420) Data frame received for 1\nI0827 01:49:51.160254    1987 log.go:172] (0xc0007b0140) (1) Data frame handling\nI0827 01:49:51.160271    1987 log.go:172] (0xc0007b0140) (1) Data frame sent\nI0827 01:49:51.160292    1987 log.go:172] (0xc0009e4420) (0xc0007b0140) Stream removed, broadcasting: 1\nI0827 01:49:51.160328    1987 log.go:172] (0xc0009e4420) Data frame received for 5\nI0827 01:49:51.160376    1987 log.go:172] (0xc0009e4420) (0xc00082c000) Stream removed, broadcasting: 3\nI0827 01:49:51.160425    1987 log.go:172] (0xc00070da40) (5) Data frame handling\nI0827 01:49:51.160449    1987 log.go:172] (0xc0009e4420) Go away received\nI0827 01:49:51.160627    1987 log.go:172] (0xc0009e4420) (0xc0007b0140) Stream removed, broadcasting: 1\nI0827 01:49:51.160646    1987 log.go:172] (0xc0009e4420) (0xc00082c000) Stream removed, broadcasting: 3\nI0827 01:49:51.160666    1987 log.go:172] (0xc0009e4420) (0xc00070da40) Stream removed, broadcasting: 5\nI0827 01:49:51.160682    1987 log.go:172] (0xc0009e4420) (0xc00070dae0) Stream removed, broadcasting: 7\n"
Aug 27 01:49:51.191: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:49:53.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1142" for this suite.

• [SLOW TEST:5.178 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":150,"skipped":2409,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:49:53.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:49:54.311: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 01:49:56.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089794, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089794, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089794, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089794, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:49:58.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089794, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089794, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089794, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089794, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:50:01.508: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:50:02.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-899" for this suite.
STEP: Destroying namespace "webhook-899-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.889 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":151,"skipped":2426,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:50:02.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-8922
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8922
STEP: Deleting pre-stop pod
Aug 27 01:50:15.330: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:50:15.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8922" for this suite.

• [SLOW TEST:13.191 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":152,"skipped":2521,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:50:15.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:50:15.418: INFO: Create a RollingUpdate DaemonSet
Aug 27 01:50:15.422: INFO: Check that daemon pods launch on every node of the cluster
Aug 27 01:50:15.449: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:15.478: INFO: Number of nodes with available pods: 0
Aug 27 01:50:15.478: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:50:16.543: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:16.554: INFO: Number of nodes with available pods: 0
Aug 27 01:50:16.554: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:50:17.483: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:17.486: INFO: Number of nodes with available pods: 0
Aug 27 01:50:17.486: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:50:18.564: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:18.566: INFO: Number of nodes with available pods: 0
Aug 27 01:50:18.566: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:50:19.482: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:19.486: INFO: Number of nodes with available pods: 0
Aug 27 01:50:19.486: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:50:20.483: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:20.506: INFO: Number of nodes with available pods: 2
Aug 27 01:50:20.506: INFO: Number of running nodes: 2, number of available pods: 2
Aug 27 01:50:20.506: INFO: Update the DaemonSet to trigger a rollout
Aug 27 01:50:20.542: INFO: Updating DaemonSet daemon-set
Aug 27 01:50:31.632: INFO: Roll back the DaemonSet before rollout is complete
Aug 27 01:50:31.637: INFO: Updating DaemonSet daemon-set
Aug 27 01:50:31.637: INFO: Make sure DaemonSet rollback is complete
Aug 27 01:50:31.694: INFO: Wrong image for pod: daemon-set-8mv5h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 27 01:50:31.694: INFO: Pod daemon-set-8mv5h is not available
Aug 27 01:50:31.710: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:32.714: INFO: Wrong image for pod: daemon-set-8mv5h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 27 01:50:32.714: INFO: Pod daemon-set-8mv5h is not available
Aug 27 01:50:32.718: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:33.715: INFO: Wrong image for pod: daemon-set-8mv5h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 27 01:50:33.715: INFO: Pod daemon-set-8mv5h is not available
Aug 27 01:50:33.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:34.715: INFO: Wrong image for pod: daemon-set-8mv5h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 27 01:50:34.715: INFO: Pod daemon-set-8mv5h is not available
Aug 27 01:50:34.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:35.715: INFO: Wrong image for pod: daemon-set-8mv5h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 27 01:50:35.715: INFO: Pod daemon-set-8mv5h is not available
Aug 27 01:50:35.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:36.715: INFO: Wrong image for pod: daemon-set-8mv5h. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 27 01:50:36.715: INFO: Pod daemon-set-8mv5h is not available
Aug 27 01:50:36.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:50:37.715: INFO: Pod daemon-set-96w5p is not available
Aug 27 01:50:37.720: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2462, will wait for the garbage collector to delete the pods
Aug 27 01:50:37.786: INFO: Deleting DaemonSet.extensions daemon-set took: 6.792288ms
Aug 27 01:50:38.086: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.283138ms
Aug 27 01:50:51.796: INFO: Number of nodes with available pods: 0
Aug 27 01:50:51.796: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 01:50:51.798: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2462/daemonsets","resourceVersion":"4089462"},"items":null}

Aug 27 01:50:51.800: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2462/pods","resourceVersion":"4089462"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:50:51.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2462" for this suite.

• [SLOW TEST:36.469 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":153,"skipped":2531,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:50:51.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4495.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4495.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4495.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4495.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4495.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4495.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4495.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4495.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 113.111.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.111.113_udp@PTR;check="$$(dig +tcp +noall +answer +search 113.111.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.111.113_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4495.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4495.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4495.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4495.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4495.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4495.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4495.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4495.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4495.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 113.111.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.111.113_udp@PTR;check="$$(dig +tcp +noall +answer +search 113.111.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.111.113_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 27 01:51:00.049: INFO: Unable to read wheezy_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:00.053: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:00.056: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:00.060: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:00.102: INFO: Unable to read jessie_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:00.105: INFO: Unable to read jessie_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:00.108: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:00.113: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:00.252: INFO: Lookups using dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b failed for: [wheezy_udp@dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_udp@dns-test-service.dns-4495.svc.cluster.local jessie_tcp@dns-test-service.dns-4495.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local]

Aug 27 01:51:05.256: INFO: Unable to read wheezy_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:05.260: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:05.262: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:05.265: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:05.283: INFO: Unable to read jessie_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:05.285: INFO: Unable to read jessie_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:05.287: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:05.290: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:05.307: INFO: Lookups using dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b failed for: [wheezy_udp@dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_udp@dns-test-service.dns-4495.svc.cluster.local jessie_tcp@dns-test-service.dns-4495.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local]

Aug 27 01:51:10.257: INFO: Unable to read wheezy_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:10.261: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:10.265: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:10.268: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:10.292: INFO: Unable to read jessie_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:10.295: INFO: Unable to read jessie_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:10.298: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:10.300: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:10.319: INFO: Lookups using dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b failed for: [wheezy_udp@dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_udp@dns-test-service.dns-4495.svc.cluster.local jessie_tcp@dns-test-service.dns-4495.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local]

Aug 27 01:51:15.257: INFO: Unable to read wheezy_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:15.260: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:15.262: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:15.264: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:15.284: INFO: Unable to read jessie_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:15.287: INFO: Unable to read jessie_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:15.290: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:15.292: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:15.305: INFO: Lookups using dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b failed for: [wheezy_udp@dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_udp@dns-test-service.dns-4495.svc.cluster.local jessie_tcp@dns-test-service.dns-4495.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local]

Aug 27 01:51:20.258: INFO: Unable to read wheezy_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:20.262: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:20.265: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:20.269: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:20.292: INFO: Unable to read jessie_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:20.295: INFO: Unable to read jessie_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:20.298: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:20.301: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:20.320: INFO: Lookups using dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b failed for: [wheezy_udp@dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_udp@dns-test-service.dns-4495.svc.cluster.local jessie_tcp@dns-test-service.dns-4495.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local]

Aug 27 01:51:25.265: INFO: Unable to read wheezy_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:25.268: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:25.270: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:25.272: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:25.288: INFO: Unable to read jessie_udp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:25.291: INFO: Unable to read jessie_tcp@dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:25.294: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:25.296: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local from pod dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b: the server could not find the requested resource (get pods dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b)
Aug 27 01:51:25.312: INFO: Lookups using dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b failed for: [wheezy_udp@dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@dns-test-service.dns-4495.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_udp@dns-test-service.dns-4495.svc.cluster.local jessie_tcp@dns-test-service.dns-4495.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4495.svc.cluster.local]

Aug 27 01:51:30.322: INFO: DNS probes using dns-4495/dns-test-a9306f70-d255-43ba-b434-701b3c9c7e5b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:51:31.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4495" for this suite.

• [SLOW TEST:39.499 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":154,"skipped":2568,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:51:31.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-daf3f928-9841-4a0c-bf82-afb67c7e68eb
STEP: Creating a pod to test consume secrets
Aug 27 01:51:31.451: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c101d7c0-2878-420b-91a8-1d2b688955a2" in namespace "projected-5991" to be "success or failure"
Aug 27 01:51:31.454: INFO: Pod "pod-projected-secrets-c101d7c0-2878-420b-91a8-1d2b688955a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.843686ms
Aug 27 01:51:33.515: INFO: Pod "pod-projected-secrets-c101d7c0-2878-420b-91a8-1d2b688955a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064486134s
Aug 27 01:51:35.948: INFO: Pod "pod-projected-secrets-c101d7c0-2878-420b-91a8-1d2b688955a2": Phase="Running", Reason="", readiness=true. Elapsed: 4.497179454s
Aug 27 01:51:38.108: INFO: Pod "pod-projected-secrets-c101d7c0-2878-420b-91a8-1d2b688955a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.657489828s
STEP: Saw pod success
Aug 27 01:51:38.108: INFO: Pod "pod-projected-secrets-c101d7c0-2878-420b-91a8-1d2b688955a2" satisfied condition "success or failure"
Aug 27 01:51:38.155: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-c101d7c0-2878-420b-91a8-1d2b688955a2 container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 01:51:38.338: INFO: Waiting for pod pod-projected-secrets-c101d7c0-2878-420b-91a8-1d2b688955a2 to disappear
Aug 27 01:51:38.389: INFO: Pod pod-projected-secrets-c101d7c0-2878-420b-91a8-1d2b688955a2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:51:38.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5991" for this suite.

• [SLOW TEST:7.216 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2584,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:51:38.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Aug 27 01:51:39.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-394'
Aug 27 01:51:39.814: INFO: stderr: ""
Aug 27 01:51:39.814: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 27 01:51:40.929: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:51:40.929: INFO: Found 0 / 1
Aug 27 01:51:41.845: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:51:41.845: INFO: Found 0 / 1
Aug 27 01:51:42.917: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:51:42.917: INFO: Found 0 / 1
Aug 27 01:51:43.817: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:51:43.817: INFO: Found 0 / 1
Aug 27 01:51:44.827: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:51:44.827: INFO: Found 0 / 1
Aug 27 01:51:45.818: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:51:45.818: INFO: Found 1 / 1
Aug 27 01:51:45.818: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 27 01:51:45.821: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:51:45.821: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 27 01:51:45.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-mvh2t --namespace=kubectl-394 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 27 01:51:45.915: INFO: stderr: ""
Aug 27 01:51:45.915: INFO: stdout: "pod/agnhost-master-mvh2t patched\n"
STEP: checking annotations
Aug 27 01:51:45.920: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 27 01:51:45.920: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:51:45.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-394" for this suite.

• [SLOW TEST:7.394 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1433
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":156,"skipped":2609,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:51:45.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:51:46.339: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 01:51:48.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089906, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089906, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089906, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089906, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:51:51.381: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:51:51.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2697" for this suite.
STEP: Destroying namespace "webhook-2697-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.922 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":157,"skipped":2613,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:51:51.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:51:53.212: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 01:51:55.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089913, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089913, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089913, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734089913, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:51:58.314: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:51:58.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3829" for this suite.
STEP: Destroying namespace "webhook-3829-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.681 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":158,"skipped":2624,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:51:58.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 27 01:52:07.047: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 27 01:52:07.053: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 27 01:52:09.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 27 01:52:09.057: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 27 01:52:11.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 27 01:52:11.058: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 27 01:52:13.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 27 01:52:13.058: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 27 01:52:15.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 27 01:52:15.057: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 27 01:52:17.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 27 01:52:17.057: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 27 01:52:19.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 27 01:52:19.058: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 27 01:52:21.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 27 01:52:21.058: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 27 01:52:23.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 27 01:52:23.366: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:52:23.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8017" for this suite.

• [SLOW TEST:24.883 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2672,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:52:23.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-9e545fde-2f27-4ee0-a1ef-4606d32018d3
STEP: Creating a pod to test consume configMaps
Aug 27 01:52:23.866: INFO: Waiting up to 5m0s for pod "pod-configmaps-ca94df9b-a29a-49be-bbcb-945fb1e4ce46" in namespace "configmap-9520" to be "success or failure"
Aug 27 01:52:24.019: INFO: Pod "pod-configmaps-ca94df9b-a29a-49be-bbcb-945fb1e4ce46": Phase="Pending", Reason="", readiness=false. Elapsed: 152.979692ms
Aug 27 01:52:26.026: INFO: Pod "pod-configmaps-ca94df9b-a29a-49be-bbcb-945fb1e4ce46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15989578s
Aug 27 01:52:28.030: INFO: Pod "pod-configmaps-ca94df9b-a29a-49be-bbcb-945fb1e4ce46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164479075s
Aug 27 01:52:30.050: INFO: Pod "pod-configmaps-ca94df9b-a29a-49be-bbcb-945fb1e4ce46": Phase="Running", Reason="", readiness=true. Elapsed: 6.184272817s
Aug 27 01:52:32.054: INFO: Pod "pod-configmaps-ca94df9b-a29a-49be-bbcb-945fb1e4ce46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.18842122s
STEP: Saw pod success
Aug 27 01:52:32.054: INFO: Pod "pod-configmaps-ca94df9b-a29a-49be-bbcb-945fb1e4ce46" satisfied condition "success or failure"
Aug 27 01:52:32.057: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ca94df9b-a29a-49be-bbcb-945fb1e4ce46 container configmap-volume-test: 
STEP: delete the pod
Aug 27 01:52:32.141: INFO: Waiting for pod pod-configmaps-ca94df9b-a29a-49be-bbcb-945fb1e4ce46 to disappear
Aug 27 01:52:32.151: INFO: Pod pod-configmaps-ca94df9b-a29a-49be-bbcb-945fb1e4ce46 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:52:32.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9520" for this suite.

• [SLOW TEST:8.744 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2685,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:52:32.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6903
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-6903
Aug 27 01:52:32.289: INFO: Found 0 stateful pods, waiting for 1
Aug 27 01:52:42.293: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 27 01:52:42.317: INFO: Deleting all statefulset in ns statefulset-6903
Aug 27 01:52:42.354: INFO: Scaling statefulset ss to 0
Aug 27 01:53:02.423: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 01:53:02.426: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:53:02.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6903" for this suite.

• [SLOW TEST:30.339 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":161,"skipped":2726,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:53:02.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 27 01:53:03.103: INFO: Waiting up to 5m0s for pod "pod-43cc5779-2503-4e69-93b2-ee436266b9f0" in namespace "emptydir-8999" to be "success or failure"
Aug 27 01:53:03.106: INFO: Pod "pod-43cc5779-2503-4e69-93b2-ee436266b9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.032022ms
Aug 27 01:53:05.110: INFO: Pod "pod-43cc5779-2503-4e69-93b2-ee436266b9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006403878s
Aug 27 01:53:07.114: INFO: Pod "pod-43cc5779-2503-4e69-93b2-ee436266b9f0": Phase="Running", Reason="", readiness=true. Elapsed: 4.010702966s
Aug 27 01:53:09.117: INFO: Pod "pod-43cc5779-2503-4e69-93b2-ee436266b9f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01388117s
STEP: Saw pod success
Aug 27 01:53:09.117: INFO: Pod "pod-43cc5779-2503-4e69-93b2-ee436266b9f0" satisfied condition "success or failure"
Aug 27 01:53:09.119: INFO: Trying to get logs from node jerma-worker pod pod-43cc5779-2503-4e69-93b2-ee436266b9f0 container test-container: 
STEP: delete the pod
Aug 27 01:53:09.179: INFO: Waiting for pod pod-43cc5779-2503-4e69-93b2-ee436266b9f0 to disappear
Aug 27 01:53:09.198: INFO: Pod pod-43cc5779-2503-4e69-93b2-ee436266b9f0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:53:09.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8999" for this suite.

• [SLOW TEST:6.706 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2768,"failed":0}
SSSSSSS
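
A minimal sketch of the kind of pod this test creates; the image, user ID, and file name here are illustrative, not the conformance test's own (it uses a dedicated mount-test image). An emptyDir with no medium field is the "default medium" (node-local disk), and the volume is typically created world-writable, so the non-root user can create the 0644 file:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # the "non-root" part of the test variant
  containers:
  - name: test-container
    image: busybox:1.31
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # no medium set => default medium (node disk)
EOF
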
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:53:09.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:53:09.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 27 01:53:11.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9877 create -f -'
Aug 27 01:53:16.075: INFO: stderr: ""
Aug 27 01:53:16.075: INFO: stdout: "e2e-test-crd-publish-openapi-7945-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 27 01:53:16.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9877 delete e2e-test-crd-publish-openapi-7945-crds test-cr'
Aug 27 01:53:16.209: INFO: stderr: ""
Aug 27 01:53:16.209: INFO: stdout: "e2e-test-crd-publish-openapi-7945-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 27 01:53:16.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9877 apply -f -'
Aug 27 01:53:16.465: INFO: stderr: ""
Aug 27 01:53:16.465: INFO: stdout: "e2e-test-crd-publish-openapi-7945-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 27 01:53:16.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9877 delete e2e-test-crd-publish-openapi-7945-crds test-cr'
Aug 27 01:53:16.637: INFO: stderr: ""
Aug 27 01:53:16.637: INFO: stdout: "e2e-test-crd-publish-openapi-7945-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 27 01:53:16.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7945-crds'
Aug 27 01:53:16.862: INFO: stderr: ""
Aug 27 01:53:16.862: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7945-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:53:18.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9877" for this suite.

• [SLOW TEST:9.518 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":163,"skipped":2775,"failed":0}
SSSSSSSSSSSSSSSSS
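
The behavior above hinges on x-kubernetes-preserve-unknown-fields at the schema root: the published OpenAPI schema is then effectively empty, client-side validation accepts arbitrary properties, and kubectl explain has no field documentation to print (hence the blank DESCRIPTION). A sketch of such a CRD; the group and names are placeholders, not the test's generated e2e-test-crd-publish-openapi-7945-crds:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.example.com          # placeholder name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # keep unknown fields at the root
EOF
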
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:53:18.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:53:18.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 27 01:53:19.371: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-27T01:53:19Z generation:1 name:name1 resourceVersion:4090362 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d58bfd50-485e-4f1a-b3ae-93ab80d7db6e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 27 01:53:29.376: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-27T01:53:29Z generation:1 name:name2 resourceVersion:4090396 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:66bf46e7-a418-406f-8bc8-99840679db5a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 27 01:53:39.381: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-27T01:53:19Z generation:2 name:name1 resourceVersion:4090426 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d58bfd50-485e-4f1a-b3ae-93ab80d7db6e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 27 01:53:49.387: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-27T01:53:29Z generation:2 name:name2 resourceVersion:4090456 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:66bf46e7-a418-406f-8bc8-99840679db5a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 27 01:53:59.405: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-27T01:53:19Z generation:2 name:name1 resourceVersion:4090486 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d58bfd50-485e-4f1a-b3ae-93ab80d7db6e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 27 01:54:09.412: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-27T01:53:29Z generation:2 name:name2 resourceVersion:4090516 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:66bf46e7-a418-406f-8bc8-99840679db5a] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:54:19.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-4374" for this suite.

• [SLOW TEST:61.208 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":164,"skipped":2792,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
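
The "Got : ADDED/MODIFIED/DELETED" lines above are ordinary watch events on the custom-resource endpoint. With the CRD from the test installed (group, version, and plural taken from the selfLinks in the log; the resource is cluster-scoped), the same stream can be observed by hand:

# Stream watch events for the custom resources as JSON
kubectl get --raw '/apis/mygroup.example.com/v1beta1/noxus?watch=true'
# Or via kubectl's watch mode
kubectl get noxus -w
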
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:54:19.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 01:54:20.117: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 27 01:54:20.158: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:20.163: INFO: Number of nodes with available pods: 0
Aug 27 01:54:20.163: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:54:21.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:21.174: INFO: Number of nodes with available pods: 0
Aug 27 01:54:21.174: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:54:22.168: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:22.171: INFO: Number of nodes with available pods: 0
Aug 27 01:54:22.171: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:54:23.243: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:23.247: INFO: Number of nodes with available pods: 0
Aug 27 01:54:23.247: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:54:24.194: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:24.198: INFO: Number of nodes with available pods: 0
Aug 27 01:54:24.198: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:54:25.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:25.170: INFO: Number of nodes with available pods: 1
Aug 27 01:54:25.170: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:54:26.177: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:26.181: INFO: Number of nodes with available pods: 2
Aug 27 01:54:26.181: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 27 01:54:26.314: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:26.314: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:26.318: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:27.322: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:27.323: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:27.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:28.632: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:28.632: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:28.635: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:29.323: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:29.323: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:29.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:30.937: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:30.937: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:30.942: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:31.591: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:31.591: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:31.595: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:32.572: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:32.572: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:32.572: INFO: Pod daemon-set-gggmz is not available
Aug 27 01:54:32.577: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:33.356: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:33.356: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:33.356: INFO: Pod daemon-set-gggmz is not available
Aug 27 01:54:33.431: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:34.323: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:34.323: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:34.323: INFO: Pod daemon-set-gggmz is not available
Aug 27 01:54:34.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:35.322: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:35.322: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:35.322: INFO: Pod daemon-set-gggmz is not available
Aug 27 01:54:35.326: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:36.572: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:36.572: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:36.572: INFO: Pod daemon-set-gggmz is not available
Aug 27 01:54:36.575: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:37.322: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:37.322: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:37.322: INFO: Pod daemon-set-gggmz is not available
Aug 27 01:54:37.325: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:38.322: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:38.322: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:38.322: INFO: Pod daemon-set-gggmz is not available
Aug 27 01:54:38.325: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:39.322: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:39.322: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:39.322: INFO: Pod daemon-set-gggmz is not available
Aug 27 01:54:39.325: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:40.322: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:40.322: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:40.322: INFO: Pod daemon-set-gggmz is not available
Aug 27 01:54:40.325: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:41.323: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:41.323: INFO: Wrong image for pod: daemon-set-gggmz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:41.323: INFO: Pod daemon-set-gggmz is not available
Aug 27 01:54:41.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:43.004: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:43.004: INFO: Pod daemon-set-jmm55 is not available
Aug 27 01:54:43.471: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:44.323: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:44.323: INFO: Pod daemon-set-jmm55 is not available
Aug 27 01:54:44.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:45.351: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:45.351: INFO: Pod daemon-set-jmm55 is not available
Aug 27 01:54:45.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:46.410: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:46.410: INFO: Pod daemon-set-jmm55 is not available
Aug 27 01:54:46.413: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:47.322: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:47.322: INFO: Pod daemon-set-jmm55 is not available
Aug 27 01:54:47.324: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:48.638: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:48.638: INFO: Pod daemon-set-jmm55 is not available
Aug 27 01:54:49.087: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:49.322: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:49.322: INFO: Pod daemon-set-jmm55 is not available
Aug 27 01:54:49.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:50.458: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:50.462: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:51.776: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:51.781: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:52.698: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:52.701: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:53.476: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:53.480: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:54.323: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:54.323: INFO: Pod daemon-set-clgsp is not available
Aug 27 01:54:54.326: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:55.464: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:55.464: INFO: Pod daemon-set-clgsp is not available
Aug 27 01:54:55.467: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:56.322: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:56.323: INFO: Pod daemon-set-clgsp is not available
Aug 27 01:54:56.326: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:57.638: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:57.638: INFO: Pod daemon-set-clgsp is not available
Aug 27 01:54:57.983: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:58.338: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:58.338: INFO: Pod daemon-set-clgsp is not available
Aug 27 01:54:58.341: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:54:59.322: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:54:59.322: INFO: Pod daemon-set-clgsp is not available
Aug 27 01:54:59.326: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:00.323: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:55:00.323: INFO: Pod daemon-set-clgsp is not available
Aug 27 01:55:00.326: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:01.323: INFO: Wrong image for pod: daemon-set-clgsp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Aug 27 01:55:01.323: INFO: Pod daemon-set-clgsp is not available
Aug 27 01:55:01.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:03.226: INFO: Pod daemon-set-nv8g6 is not available
Aug 27 01:55:03.231: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:03.404: INFO: Pod daemon-set-nv8g6 is not available
Aug 27 01:55:03.414: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 27 01:55:03.434: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:03.490: INFO: Number of nodes with available pods: 1
Aug 27 01:55:03.490: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:55:04.584: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:04.680: INFO: Number of nodes with available pods: 1
Aug 27 01:55:04.680: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:55:05.597: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:05.602: INFO: Number of nodes with available pods: 1
Aug 27 01:55:05.602: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:55:06.496: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:06.499: INFO: Number of nodes with available pods: 1
Aug 27 01:55:06.499: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:55:07.542: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:08.337: INFO: Number of nodes with available pods: 1
Aug 27 01:55:08.337: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:55:08.632: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:08.689: INFO: Number of nodes with available pods: 1
Aug 27 01:55:08.689: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:55:09.494: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:09.496: INFO: Number of nodes with available pods: 1
Aug 27 01:55:09.496: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:55:10.500: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:10.502: INFO: Number of nodes with available pods: 1
Aug 27 01:55:10.502: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:55:11.560: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:11.574: INFO: Number of nodes with available pods: 1
Aug 27 01:55:11.574: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 01:55:12.494: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 01:55:12.497: INFO: Number of nodes with available pods: 2
Aug 27 01:55:12.497: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8742, will wait for the garbage collector to delete the pods
Aug 27 01:55:12.566: INFO: Deleting DaemonSet.extensions daemon-set took: 6.207463ms
Aug 27 01:55:13.167: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.70619ms
Aug 27 01:55:21.770: INFO: Number of nodes with available pods: 0
Aug 27 01:55:21.770: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 01:55:21.772: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8742/daemonsets","resourceVersion":"4090789"},"items":null}

Aug 27 01:55:21.794: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8742/pods","resourceVersion":"4090789"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:55:21.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8742" for this suite.

• [SLOW TEST:61.874 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":165,"skipped":2821,"failed":0}
SS
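
With updateStrategy.type=RollingUpdate, changing the pod template makes the DaemonSet controller replace pods node by node, which is the long poll visible above: the old httpd pods go unavailable one at a time and new agnhost pods come up, until every node reports the new image. The update step can be reproduced by hand; the names and images below are taken from the log, and the patch path assumes a single container in the template:

# Update the DaemonSet pod template image (triggers the rolling update)
kubectl -n daemonsets-8742 patch daemonset daemon-set --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"gcr.io/kubernetes-e2e-test-images/agnhost:2.8"}]'
# Block until the rollout has completed on every node
kubectl -n daemonsets-8742 rollout status daemonset daemon-set
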
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:55:21.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Aug 27 01:55:21.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7945'
Aug 27 01:55:22.179: INFO: stderr: ""
Aug 27 01:55:22.179: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 01:55:22.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7945'
Aug 27 01:55:22.298: INFO: stderr: ""
Aug 27 01:55:22.298: INFO: stdout: "update-demo-nautilus-4lzfk update-demo-nautilus-v8j9w "
Aug 27 01:55:22.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lzfk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7945'
Aug 27 01:55:22.431: INFO: stderr: ""
Aug 27 01:55:22.431: INFO: stdout: ""
Aug 27 01:55:22.431: INFO: update-demo-nautilus-4lzfk is created but not running
Aug 27 01:55:27.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7945'
Aug 27 01:55:27.533: INFO: stderr: ""
Aug 27 01:55:27.533: INFO: stdout: "update-demo-nautilus-4lzfk update-demo-nautilus-v8j9w "
Aug 27 01:55:27.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lzfk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7945'
Aug 27 01:55:27.630: INFO: stderr: ""
Aug 27 01:55:27.630: INFO: stdout: "true"
Aug 27 01:55:27.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lzfk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7945'
Aug 27 01:55:27.725: INFO: stderr: ""
Aug 27 01:55:27.725: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 01:55:27.725: INFO: validating pod update-demo-nautilus-4lzfk
Aug 27 01:55:27.728: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 01:55:27.728: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 01:55:27.728: INFO: update-demo-nautilus-4lzfk is verified up and running
Aug 27 01:55:27.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v8j9w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7945'
Aug 27 01:55:27.812: INFO: stderr: ""
Aug 27 01:55:27.812: INFO: stdout: "true"
Aug 27 01:55:27.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v8j9w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7945'
Aug 27 01:55:27.909: INFO: stderr: ""
Aug 27 01:55:27.909: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 01:55:27.909: INFO: validating pod update-demo-nautilus-v8j9w
Aug 27 01:55:27.915: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 01:55:27.915: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 01:55:27.915: INFO: update-demo-nautilus-v8j9w is verified up and running
STEP: rolling-update to new replication controller
Aug 27 01:55:27.917: INFO: scanned /root for discovery docs: 
Aug 27 01:55:27.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7945'
Aug 27 01:55:50.781: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 27 01:55:50.781: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 01:55:50.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7945'
Aug 27 01:55:50.958: INFO: stderr: ""
Aug 27 01:55:50.958: INFO: stdout: "update-demo-kitten-4xskq update-demo-kitten-l7wql "
Aug 27 01:55:50.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4xskq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7945'
Aug 27 01:55:51.206: INFO: stderr: ""
Aug 27 01:55:51.206: INFO: stdout: "true"
Aug 27 01:55:51.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4xskq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7945'
Aug 27 01:55:51.302: INFO: stderr: ""
Aug 27 01:55:51.302: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 27 01:55:51.302: INFO: validating pod update-demo-kitten-4xskq
Aug 27 01:55:51.306: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 27 01:55:51.306: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 27 01:55:51.306: INFO: update-demo-kitten-4xskq is verified up and running
Aug 27 01:55:51.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l7wql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7945'
Aug 27 01:55:51.578: INFO: stderr: ""
Aug 27 01:55:51.578: INFO: stdout: "true"
Aug 27 01:55:51.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l7wql -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7945'
Aug 27 01:55:51.668: INFO: stderr: ""
Aug 27 01:55:51.668: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 27 01:55:51.668: INFO: validating pod update-demo-kitten-l7wql
Aug 27 01:55:51.789: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 27 01:55:51.789: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 27 01:55:51.789: INFO: update-demo-kitten-l7wql is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:55:51.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7945" for this suite.

• [SLOW TEST:30.172 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":166,"skipped":2823,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
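
As the stderr above notes, kubectl rolling-update is deprecated: it hand-rolls the update by creating a second replication controller (update-demo-kitten), scaling it up while scaling the old one down, then renaming it. The modern equivalent is a Deployment, which performs the same dance declaratively via ReplicaSets; a sketch, assuming an update-demo Deployment with a container of the same name rather than the log's replication controller:

kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo
# And, unlike rolling-update, the rollout history supports rollback:
kubectl rollout undo deployment/update-demo
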
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:55:51.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-xst4
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 01:55:52.464: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xst4" in namespace "subpath-3280" to be "success or failure"
Aug 27 01:55:52.950: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Pending", Reason="", readiness=false. Elapsed: 486.497136ms
Aug 27 01:55:55.643: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.179484093s
Aug 27 01:55:57.685: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.221658386s
Aug 27 01:55:59.737: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Running", Reason="", readiness=true. Elapsed: 7.273039454s
Aug 27 01:56:01.741: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Running", Reason="", readiness=true. Elapsed: 9.277344481s
Aug 27 01:56:03.744: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Running", Reason="", readiness=true. Elapsed: 11.280486528s
Aug 27 01:56:05.949: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Running", Reason="", readiness=true. Elapsed: 13.48552139s
Aug 27 01:56:07.953: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Running", Reason="", readiness=true. Elapsed: 15.489251727s
Aug 27 01:56:09.974: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Running", Reason="", readiness=true. Elapsed: 17.509789796s
Aug 27 01:56:12.045: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Running", Reason="", readiness=true. Elapsed: 19.581697491s
Aug 27 01:56:14.430: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Running", Reason="", readiness=true. Elapsed: 21.966207029s
Aug 27 01:56:16.435: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Running", Reason="", readiness=true. Elapsed: 23.971019329s
Aug 27 01:56:18.440: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Running", Reason="", readiness=true. Elapsed: 25.976008025s
Aug 27 01:56:20.452: INFO: Pod "pod-subpath-test-configmap-xst4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.988739628s
STEP: Saw pod success
Aug 27 01:56:20.452: INFO: Pod "pod-subpath-test-configmap-xst4" satisfied condition "success or failure"
Aug 27 01:56:20.455: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-xst4 container test-container-subpath-configmap-xst4: 
STEP: delete the pod
Aug 27 01:56:20.512: INFO: Waiting for pod pod-subpath-test-configmap-xst4 to disappear
Aug 27 01:56:20.540: INFO: Pod pod-subpath-test-configmap-xst4 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xst4
Aug 27 01:56:20.540: INFO: Deleting pod "pod-subpath-test-configmap-xst4" in namespace "subpath-3280"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:56:20.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3280" for this suite.

• [SLOW TEST:28.795 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":167,"skipped":2896,"failed":0}
SSSS
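
A subPath mount projects a single key of the volume onto one path instead of shadowing the whole directory, which is what lets the test use a mountPath that is an existing file. A minimal sketch (ConfigMap name, key, and paths invented for illustration); one caveat worth knowing: unlike whole-directory ConfigMap mounts, subPath mounts are materialized once at pod start and do not receive later ConfigMap updates:

kubectl create configmap demo-config --from-literal=config.txt='hello'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/demo/config.txt"]
    volumeMounts:
    - name: config
      mountPath: /etc/demo/config.txt   # a single file, not a directory
      subPath: config.txt               # project just this key
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF
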
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:56:20.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5179.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5179.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5179.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5179.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 27 01:56:31.888: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local from pod dns-5179/dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f: the server could not find the requested resource (get pods dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f)
Aug 27 01:56:31.891: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local from pod dns-5179/dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f: the server could not find the requested resource (get pods dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f)
Aug 27 01:56:31.927: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5179.svc.cluster.local from pod dns-5179/dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f: the server could not find the requested resource (get pods dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f)
Aug 27 01:56:31.931: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5179.svc.cluster.local from pod dns-5179/dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f: the server could not find the requested resource (get pods dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f)
Aug 27 01:56:31.941: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local from pod dns-5179/dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f: the server could not find the requested resource (get pods dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f)
Aug 27 01:56:31.943: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local from pod dns-5179/dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f: the server could not find the requested resource (get pods dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f)
Aug 27 01:56:31.946: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5179.svc.cluster.local from pod dns-5179/dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f: the server could not find the requested resource (get pods dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f)
Aug 27 01:56:31.949: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5179.svc.cluster.local from pod dns-5179/dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f: the server could not find the requested resource (get pods dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f)
Aug 27 01:56:32.000: INFO: Lookups using dns-5179/dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5179.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5179.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local jessie_udp@dns-test-service-2.dns-5179.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5179.svc.cluster.local]

Aug 27 01:56:37 - 01:56:57: INFO: The same eight lookups (wheezy and jessie, UDP and TCP, against dns-querier-2.dns-test-service-2.dns-5179.svc.cluster.local and dns-test-service-2.dns-5179.svc.cluster.local) failed identically on the retries at 01:56:37, 01:56:42, 01:56:47, 01:56:52, and 01:56:57, each time because the results could not yet be read from pod dns-5179/dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f ("the server could not find the requested resource").

Aug 27 01:57:02.038: INFO: DNS probes using dns-5179/dns-test-4d37fd4a-719f-4ac5-b542-f8bf40de830f succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:57:02.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5179" for this suite.

• [SLOW TEST:41.913 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":168,"skipped":2900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:57:02.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Aug 27 01:57:09.931: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:57:10.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6924" for this suite.

• [SLOW TEST:8.284 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":169,"skipped":2923,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:57:10.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-162
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 27 01:57:11.144: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 27 01:57:39.300: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.2:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-162 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:57:39.300: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:57:39.337230       6 log.go:172] (0xc002bee370) (0xc001cfafa0) Create stream
I0827 01:57:39.337260       6 log.go:172] (0xc002bee370) (0xc001cfafa0) Stream added, broadcasting: 1
I0827 01:57:39.339120       6 log.go:172] (0xc002bee370) Reply frame received for 1
I0827 01:57:39.339164       6 log.go:172] (0xc002bee370) (0xc002a02000) Create stream
I0827 01:57:39.339176       6 log.go:172] (0xc002bee370) (0xc002a02000) Stream added, broadcasting: 3
I0827 01:57:39.340175       6 log.go:172] (0xc002bee370) Reply frame received for 3
I0827 01:57:39.340224       6 log.go:172] (0xc002bee370) (0xc001cfb040) Create stream
I0827 01:57:39.340241       6 log.go:172] (0xc002bee370) (0xc001cfb040) Stream added, broadcasting: 5
I0827 01:57:39.341257       6 log.go:172] (0xc002bee370) Reply frame received for 5
I0827 01:57:39.426006       6 log.go:172] (0xc002bee370) Data frame received for 5
I0827 01:57:39.426029       6 log.go:172] (0xc001cfb040) (5) Data frame handling
I0827 01:57:39.426049       6 log.go:172] (0xc002bee370) Data frame received for 3
I0827 01:57:39.426080       6 log.go:172] (0xc002a02000) (3) Data frame handling
I0827 01:57:39.426097       6 log.go:172] (0xc002a02000) (3) Data frame sent
I0827 01:57:39.426104       6 log.go:172] (0xc002bee370) Data frame received for 3
I0827 01:57:39.426109       6 log.go:172] (0xc002a02000) (3) Data frame handling
I0827 01:57:39.427461       6 log.go:172] (0xc002bee370) Data frame received for 1
I0827 01:57:39.427492       6 log.go:172] (0xc001cfafa0) (1) Data frame handling
I0827 01:57:39.427523       6 log.go:172] (0xc001cfafa0) (1) Data frame sent
I0827 01:57:39.427607       6 log.go:172] (0xc002bee370) (0xc001cfafa0) Stream removed, broadcasting: 1
I0827 01:57:39.427628       6 log.go:172] (0xc002bee370) Go away received
I0827 01:57:39.427710       6 log.go:172] (0xc002bee370) (0xc001cfafa0) Stream removed, broadcasting: 1
I0827 01:57:39.427732       6 log.go:172] (0xc002bee370) (0xc002a02000) Stream removed, broadcasting: 3
I0827 01:57:39.427751       6 log.go:172] (0xc002bee370) (0xc001cfb040) Stream removed, broadcasting: 5
Aug 27 01:57:39.427: INFO: Found all expected endpoints: [netserver-0]
Aug 27 01:57:39.430: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.191:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-162 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 01:57:39.430: INFO: >>> kubeConfig: /root/.kube/config
I0827 01:57:39.463839       6 log.go:172] (0xc002f784d0) (0xc002598fa0) Create stream
I0827 01:57:39.463869       6 log.go:172] (0xc002f784d0) (0xc002598fa0) Stream added, broadcasting: 1
I0827 01:57:39.465580       6 log.go:172] (0xc002f784d0) Reply frame received for 1
I0827 01:57:39.465618       6 log.go:172] (0xc002f784d0) (0xc0013d8000) Create stream
I0827 01:57:39.465626       6 log.go:172] (0xc002f784d0) (0xc0013d8000) Stream added, broadcasting: 3
I0827 01:57:39.466269       6 log.go:172] (0xc002f784d0) Reply frame received for 3
I0827 01:57:39.466300       6 log.go:172] (0xc002f784d0) (0xc0025990e0) Create stream
I0827 01:57:39.466307       6 log.go:172] (0xc002f784d0) (0xc0025990e0) Stream added, broadcasting: 5
I0827 01:57:39.467001       6 log.go:172] (0xc002f784d0) Reply frame received for 5
I0827 01:57:39.525473       6 log.go:172] (0xc002f784d0) Data frame received for 5
I0827 01:57:39.525497       6 log.go:172] (0xc0025990e0) (5) Data frame handling
I0827 01:57:39.526412       6 log.go:172] (0xc002f784d0) Data frame received for 3
I0827 01:57:39.526433       6 log.go:172] (0xc0013d8000) (3) Data frame handling
I0827 01:57:39.526446       6 log.go:172] (0xc0013d8000) (3) Data frame sent
I0827 01:57:39.526460       6 log.go:172] (0xc002f784d0) Data frame received for 3
I0827 01:57:39.526471       6 log.go:172] (0xc0013d8000) (3) Data frame handling
I0827 01:57:39.527832       6 log.go:172] (0xc002f784d0) Data frame received for 1
I0827 01:57:39.527852       6 log.go:172] (0xc002598fa0) (1) Data frame handling
I0827 01:57:39.527862       6 log.go:172] (0xc002598fa0) (1) Data frame sent
I0827 01:57:39.527877       6 log.go:172] (0xc002f784d0) (0xc002598fa0) Stream removed, broadcasting: 1
I0827 01:57:39.527893       6 log.go:172] (0xc002f784d0) Go away received
I0827 01:57:39.528008       6 log.go:172] (0xc002f784d0) (0xc002598fa0) Stream removed, broadcasting: 1
I0827 01:57:39.528030       6 log.go:172] (0xc002f784d0) (0xc0013d8000) Stream removed, broadcasting: 3
I0827 01:57:39.528043       6 log.go:172] (0xc002f784d0) (0xc0025990e0) Stream removed, broadcasting: 5
Aug 27 01:57:39.528: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:57:39.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-162" for this suite.

• [SLOW TEST:28.561 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2939,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:57:39.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:57:39.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-467d0614-b1ac-4684-aa85-251beb3c4fc5" in namespace "downward-api-3366" to be "success or failure"
Aug 27 01:57:39.596: INFO: Pod "downwardapi-volume-467d0614-b1ac-4684-aa85-251beb3c4fc5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.944172ms
Aug 27 01:57:41.601: INFO: Pod "downwardapi-volume-467d0614-b1ac-4684-aa85-251beb3c4fc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008239656s
Aug 27 01:57:43.607: INFO: Pod "downwardapi-volume-467d0614-b1ac-4684-aa85-251beb3c4fc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014529492s
STEP: Saw pod success
Aug 27 01:57:43.607: INFO: Pod "downwardapi-volume-467d0614-b1ac-4684-aa85-251beb3c4fc5" satisfied condition "success or failure"
Aug 27 01:57:43.609: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-467d0614-b1ac-4684-aa85-251beb3c4fc5 container client-container: 
STEP: delete the pod
Aug 27 01:57:43.675: INFO: Waiting for pod downwardapi-volume-467d0614-b1ac-4684-aa85-251beb3c4fc5 to disappear
Aug 27 01:57:43.686: INFO: Pod downwardapi-volume-467d0614-b1ac-4684-aa85-251beb3c4fc5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:57:43.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3366" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2946,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:57:43.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-0077f298-54f4-4677-90f9-266ec13538dd
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:57:43.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-45" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":172,"skipped":2957,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:57:43.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 01:57:44.136: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 01:57:46.502: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734090264, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734090264, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734090264, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734090264, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 01:57:48.506: INFO: deployment status: unchanged from the previous poll (ReplicaSet "sample-webhook-deployment-5f65f8c764" still progressing; minimum availability not yet met)
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 01:57:51.634: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:57:51.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1444" for this suite.
STEP: Destroying namespace "webhook-1444-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.165 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":173,"skipped":2969,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:57:51.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 27 01:57:52.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3382'
Aug 27 01:57:52.223: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 01:57:52.223: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Aug 27 01:57:54.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3382'
Aug 27 01:57:54.596: INFO: stderr: ""
Aug 27 01:57:54.596: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:57:54.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3382" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":174,"skipped":2985,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:57:54.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:58:07.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8507" for this suite.

• [SLOW TEST:12.951 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2988,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:58:07.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 27 01:58:07.653: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:58:31.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6467" for this suite.

• [SLOW TEST:24.025 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2998,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:58:31.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:58:31.731: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f36ae43-a161-4f42-8a20-06e6aa3ddd9a" in namespace "downward-api-9542" to be "success or failure"
Aug 27 01:58:31.742: INFO: Pod "downwardapi-volume-6f36ae43-a161-4f42-8a20-06e6aa3ddd9a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.260691ms
Aug 27 01:58:33.825: INFO: Pod "downwardapi-volume-6f36ae43-a161-4f42-8a20-06e6aa3ddd9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093942863s
Aug 27 01:58:36.511: INFO: Pod "downwardapi-volume-6f36ae43-a161-4f42-8a20-06e6aa3ddd9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.779383708s
Aug 27 01:58:38.585: INFO: Pod "downwardapi-volume-6f36ae43-a161-4f42-8a20-06e6aa3ddd9a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.853245313s
Aug 27 01:58:40.658: INFO: Pod "downwardapi-volume-6f36ae43-a161-4f42-8a20-06e6aa3ddd9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.926497067s
STEP: Saw pod success
Aug 27 01:58:40.658: INFO: Pod "downwardapi-volume-6f36ae43-a161-4f42-8a20-06e6aa3ddd9a" satisfied condition "success or failure"
Aug 27 01:58:40.660: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6f36ae43-a161-4f42-8a20-06e6aa3ddd9a container client-container: 
STEP: delete the pod
Aug 27 01:58:40.893: INFO: Waiting for pod downwardapi-volume-6f36ae43-a161-4f42-8a20-06e6aa3ddd9a to disappear
Aug 27 01:58:41.148: INFO: Pod downwardapi-volume-6f36ae43-a161-4f42-8a20-06e6aa3ddd9a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:58:41.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9542" for this suite.

• [SLOW TEST:9.637 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2999,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:58:41.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Aug 27 01:58:49.123: INFO: Successfully updated pod "annotationupdate603db26e-a21f-4b89-99cc-0767f7afec70"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:58:51.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2224" for this suite.

• [SLOW TEST:10.080 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":3008,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:58:51.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Aug 27 01:58:57.939: INFO: Pod pod-hostip-6e39cfa5-3bde-4dbd-9a35-25562892d81a has hostIP: 172.18.0.6
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:58:57.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5478" for this suite.

• [SLOW TEST:6.633 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":3034,"failed":0}
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:58:57.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-3902/configmap-test-10d643c1-cedc-4217-8f45-d3485e8970eb
STEP: Creating a pod to test consume configMaps
Aug 27 01:58:58.444: INFO: Waiting up to 5m0s for pod "pod-configmaps-9abccb72-5deb-4d3a-a15f-d31551d2e842" in namespace "configmap-3902" to be "success or failure"
Aug 27 01:58:58.450: INFO: Pod "pod-configmaps-9abccb72-5deb-4d3a-a15f-d31551d2e842": Phase="Pending", Reason="", readiness=false. Elapsed: 5.830266ms
Aug 27 01:59:00.454: INFO: Pod "pod-configmaps-9abccb72-5deb-4d3a-a15f-d31551d2e842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009940708s
Aug 27 01:59:02.459: INFO: Pod "pod-configmaps-9abccb72-5deb-4d3a-a15f-d31551d2e842": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014085969s
Aug 27 01:59:04.463: INFO: Pod "pod-configmaps-9abccb72-5deb-4d3a-a15f-d31551d2e842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01802045s
STEP: Saw pod success
Aug 27 01:59:04.463: INFO: Pod "pod-configmaps-9abccb72-5deb-4d3a-a15f-d31551d2e842" satisfied condition "success or failure"
Aug 27 01:59:04.465: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-9abccb72-5deb-4d3a-a15f-d31551d2e842 container env-test: 
STEP: delete the pod
Aug 27 01:59:04.487: INFO: Waiting for pod pod-configmaps-9abccb72-5deb-4d3a-a15f-d31551d2e842 to disappear
Aug 27 01:59:04.497: INFO: Pod pod-configmaps-9abccb72-5deb-4d3a-a15f-d31551d2e842 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:59:04.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3902" for this suite.

• [SLOW TEST:6.558 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3041,"failed":0}
SSSSSSSSS
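------------------------------
The ConfigMap test above exercises key-to-environment-variable injection. A minimal sketch of that pattern in Go using the k8s.io/api types (the names, image, command, and key/value below are illustrative assumptions, not the objects the suite actually created):

    // Build a ConfigMap and a pod that surfaces one of its keys as an
    // environment variable, then print both as YAML for inspection.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	cm := &corev1.ConfigMap{
    		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
    		Data:       map[string]string{"data-1": "value-1"},
    	}
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
    		Spec: corev1.PodSpec{
    			// Never-restart pods let a "success or failure" check key off
    			// the terminal phase, as the log's wait loop above does.
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "env-test",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "env"},
    				Env: []corev1.EnvVar{{
    					Name: "CONFIG_DATA_1",
    					ValueFrom: &corev1.EnvVarSource{
    						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
    							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
    							Key:                  "data-1",
    						},
    					},
    				}},
    			}},
    		},
    	}
    	for _, obj := range []interface{}{cm, pod} {
    		out, err := yaml.Marshal(obj)
    		if err != nil {
    			panic(err)
    		}
    		fmt.Println(string(out))
    	}
    }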
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:59:04.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-dbefe079-4cfe-44ce-a336-31024f54740a
STEP: Creating a pod to test consume configMaps
Aug 27 01:59:04.745: INFO: Waiting up to 5m0s for pod "pod-configmaps-f24a36cc-4e3c-4dd6-889a-70d7f5c3b73e" in namespace "configmap-4158" to be "success or failure"
Aug 27 01:59:04.783: INFO: Pod "pod-configmaps-f24a36cc-4e3c-4dd6-889a-70d7f5c3b73e": Phase="Pending", Reason="", readiness=false. Elapsed: 37.587197ms
Aug 27 01:59:06.786: INFO: Pod "pod-configmaps-f24a36cc-4e3c-4dd6-889a-70d7f5c3b73e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041025554s
Aug 27 01:59:08.808: INFO: Pod "pod-configmaps-f24a36cc-4e3c-4dd6-889a-70d7f5c3b73e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062261567s
STEP: Saw pod success
Aug 27 01:59:08.808: INFO: Pod "pod-configmaps-f24a36cc-4e3c-4dd6-889a-70d7f5c3b73e" satisfied condition "success or failure"
Aug 27 01:59:08.811: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f24a36cc-4e3c-4dd6-889a-70d7f5c3b73e container configmap-volume-test: 
STEP: delete the pod
Aug 27 01:59:09.048: INFO: Waiting for pod pod-configmaps-f24a36cc-4e3c-4dd6-889a-70d7f5c3b73e to disappear
Aug 27 01:59:09.209: INFO: Pod pod-configmaps-f24a36cc-4e3c-4dd6-889a-70d7f5c3b73e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:59:09.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4158" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3050,"failed":0}
SSSSSSSSSS
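------------------------------
The test above mounts one ConfigMap through two volumes in a single pod. A sketch of that shape, with mount paths and names as illustrative assumptions:

    // One ConfigMapVolumeSource reused by two volumes in the same pod spec.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	src := corev1.VolumeSource{
    		ConfigMap: &corev1.ConfigMapVolumeSource{
    			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
    		},
    	}
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{
    				{Name: "configmap-volume-1", VolumeSource: src},
    				{Name: "configmap-volume-2", VolumeSource: src},
    			},
    			Containers: []corev1.Container{{
    				Name:    "configmap-volume-test",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "cat /etc/cm-1/* /etc/cm-2/*"},
    				VolumeMounts: []corev1.VolumeMount{
    					{Name: "configmap-volume-1", MountPath: "/etc/cm-1"},
    					{Name: "configmap-volume-2", MountPath: "/etc/cm-2"},
    				},
    			}},
    		},
    	}
    	out, err := yaml.Marshal(pod)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }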
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:59:09.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:59:09.447: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b85f2b8-0e4e-4389-ba56-cf3e888d2a7d" in namespace "projected-4290" to be "success or failure"
Aug 27 01:59:09.450: INFO: Pod "downwardapi-volume-1b85f2b8-0e4e-4389-ba56-cf3e888d2a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.429103ms
Aug 27 01:59:11.501: INFO: Pod "downwardapi-volume-1b85f2b8-0e4e-4389-ba56-cf3e888d2a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05384352s
Aug 27 01:59:13.505: INFO: Pod "downwardapi-volume-1b85f2b8-0e4e-4389-ba56-cf3e888d2a7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058598529s
STEP: Saw pod success
Aug 27 01:59:13.506: INFO: Pod "downwardapi-volume-1b85f2b8-0e4e-4389-ba56-cf3e888d2a7d" satisfied condition "success or failure"
Aug 27 01:59:13.509: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1b85f2b8-0e4e-4389-ba56-cf3e888d2a7d container client-container: 
STEP: delete the pod
Aug 27 01:59:13.575: INFO: Waiting for pod downwardapi-volume-1b85f2b8-0e4e-4389-ba56-cf3e888d2a7d to disappear
Aug 27 01:59:13.661: INFO: Pod downwardapi-volume-1b85f2b8-0e4e-4389-ba56-cf3e888d2a7d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:59:13.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4290" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3060,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
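------------------------------
The projected downward API test above verifies that a per-item file mode is honored. A minimal sketch of such a volume (the path and the 0400 mode are illustrative assumptions):

    // A projected volume whose downward API item carries an explicit mode.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	mode := int32(0400) // octal: read-only for the owner
    	vol := corev1.Volume{
    		Name: "podinfo",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					DownwardAPI: &corev1.DownwardAPIProjection{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path:     "podname",
    							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
    							Mode:     &mode,
    						}},
    					},
    				}},
    			},
    		},
    	}
    	out, err := yaml.Marshal(vol)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }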
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:59:13.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 01:59:13.917: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90263c5d-f208-429c-8d1c-ffc5fd83677d" in namespace "downward-api-7291" to be "success or failure"
Aug 27 01:59:13.930: INFO: Pod "downwardapi-volume-90263c5d-f208-429c-8d1c-ffc5fd83677d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.120679ms
Aug 27 01:59:15.933: INFO: Pod "downwardapi-volume-90263c5d-f208-429c-8d1c-ffc5fd83677d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016759718s
Aug 27 01:59:18.257: INFO: Pod "downwardapi-volume-90263c5d-f208-429c-8d1c-ffc5fd83677d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.339965385s
STEP: Saw pod success
Aug 27 01:59:18.257: INFO: Pod "downwardapi-volume-90263c5d-f208-429c-8d1c-ffc5fd83677d" satisfied condition "success or failure"
Aug 27 01:59:18.259: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-90263c5d-f208-429c-8d1c-ffc5fd83677d container client-container: 
STEP: delete the pod
Aug 27 01:59:18.276: INFO: Waiting for pod downwardapi-volume-90263c5d-f208-429c-8d1c-ffc5fd83677d to disappear
Aug 27 01:59:18.281: INFO: Pod downwardapi-volume-90263c5d-f208-429c-8d1c-ffc5fd83677d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:59:18.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7291" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3088,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:59:18.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:59:42.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5184" for this suite.

• [SLOW TEST:24.164 seconds]
[sig-apps] Job
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":184,"skipped":3110,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:59:42.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-14a7d7a6-1c6d-4b08-8c29-1a57ac273789
STEP: Creating a pod to test consume configMaps
Aug 27 01:59:42.631: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ed47489-6171-4d38-a8f7-fa389f73c2e5" in namespace "projected-4599" to be "success or failure"
Aug 27 01:59:42.651: INFO: Pod "pod-projected-configmaps-2ed47489-6171-4d38-a8f7-fa389f73c2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.092773ms
Aug 27 01:59:44.712: INFO: Pod "pod-projected-configmaps-2ed47489-6171-4d38-a8f7-fa389f73c2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080974599s
Aug 27 01:59:47.197: INFO: Pod "pod-projected-configmaps-2ed47489-6171-4d38-a8f7-fa389f73c2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.566269174s
Aug 27 01:59:49.227: INFO: Pod "pod-projected-configmaps-2ed47489-6171-4d38-a8f7-fa389f73c2e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.595617497s
Aug 27 01:59:51.641: INFO: Pod "pod-projected-configmaps-2ed47489-6171-4d38-a8f7-fa389f73c2e5": Phase="Running", Reason="", readiness=true. Elapsed: 9.009854755s
Aug 27 01:59:54.202: INFO: Pod "pod-projected-configmaps-2ed47489-6171-4d38-a8f7-fa389f73c2e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.571195828s
STEP: Saw pod success
Aug 27 01:59:54.202: INFO: Pod "pod-projected-configmaps-2ed47489-6171-4d38-a8f7-fa389f73c2e5" satisfied condition "success or failure"
Aug 27 01:59:54.235: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-2ed47489-6171-4d38-a8f7-fa389f73c2e5 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 01:59:54.479: INFO: Waiting for pod pod-projected-configmaps-2ed47489-6171-4d38-a8f7-fa389f73c2e5 to disappear
Aug 27 01:59:54.599: INFO: Pod pod-projected-configmaps-2ed47489-6171-4d38-a8f7-fa389f73c2e5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:59:54.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4599" for this suite.

• [SLOW TEST:12.156 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3110,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:59:54.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-2770658e-7deb-4e91-8597-6dacfd4647d4
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 01:59:55.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3399" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":186,"skipped":3135,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 01:59:55.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:00:13.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6800" for this suite.

• [SLOW TEST:18.109 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":187,"skipped":3139,"failed":0}
SSSS
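------------------------------
The ResourceQuota test above creates one quota scoped to BestEffort pods and one scoped to NotBestEffort, then checks that each counts only matching pods. A sketch of the BestEffort-scoped quota (the name and the pod limit are illustrative assumptions):

    // A ResourceQuota that only counts BestEffort pods, i.e. pods with
    // no resource requests or limits on any container.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	quota := &corev1.ResourceQuota{
    		ObjectMeta: metav1.ObjectMeta{Name: "quota-besteffort"},
    		Spec: corev1.ResourceQuotaSpec{
    			Hard: corev1.ResourceList{
    				corev1.ResourcePods: resource.MustParse("5"),
    			},
    			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
    		},
    	}
    	out, err := yaml.Marshal(quota)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }

The companion quota in the test is the same shape with ResourceQuotaScopeNotBestEffort, which is why the log shows each quota capturing one pod and ignoring the other.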
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:00:13.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 27 02:00:14.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4062'
Aug 27 02:00:14.554: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 02:00:14.554: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Aug 27 02:00:15.683: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-4wg4s]
Aug 27 02:00:15.683: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-4wg4s" in namespace "kubectl-4062" to be "running and ready"
Aug 27 02:00:15.781: INFO: Pod "e2e-test-httpd-rc-4wg4s": Phase="Pending", Reason="", readiness=false. Elapsed: 97.601447ms
Aug 27 02:00:17.785: INFO: Pod "e2e-test-httpd-rc-4wg4s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102162512s
Aug 27 02:00:20.032: INFO: Pod "e2e-test-httpd-rc-4wg4s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.348801934s
Aug 27 02:00:22.227: INFO: Pod "e2e-test-httpd-rc-4wg4s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.543576328s
Aug 27 02:00:24.230: INFO: Pod "e2e-test-httpd-rc-4wg4s": Phase="Running", Reason="", readiness=true. Elapsed: 8.547219435s
Aug 27 02:00:24.231: INFO: Pod "e2e-test-httpd-rc-4wg4s" satisfied condition "running and ready"
Aug 27 02:00:24.231: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-4wg4s]
Aug 27 02:00:24.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-4062'
Aug 27 02:00:24.351: INFO: stderr: ""
Aug 27 02:00:24.351: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.21. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.21. Set the 'ServerName' directive globally to suppress this message\n[Thu Aug 27 02:00:20.596872 2020] [mpm_event:notice] [pid 1:tid 140530521332584] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Aug 27 02:00:20.596929 2020] [core:notice] [pid 1:tid 140530521332584] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Aug 27 02:00:24.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4062'
Aug 27 02:00:24.740: INFO: stderr: ""
Aug 27 02:00:24.740: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:00:24.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4062" for this suite.

• [SLOW TEST:12.358 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
    should create an rc from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":188,"skipped":3143,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
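------------------------------
The deprecated `kubectl run --generator=run/v1` invocation logged above expands to a ReplicationController. Roughly the object it produces, sketched in Go (the `run` label and single replica reflect the generator's convention as I understand it; treat the details as assumptions):

    // Approximate expansion of `kubectl run e2e-test-httpd-rc
    // --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1`.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"sigs.k8s.io/yaml"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
    	labels := map[string]string{"run": "e2e-test-httpd-rc"}
    	rc := &corev1.ReplicationController{
    		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-rc", Labels: labels},
    		Spec: corev1.ReplicationControllerSpec{
    			Replicas: int32Ptr(1),
    			Selector: labels,
    			Template: &corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{
    					Containers: []corev1.Container{{
    						Name:  "e2e-test-httpd-rc",
    						Image: "docker.io/library/httpd:2.4.38-alpine",
    					}},
    				},
    			},
    		},
    	}
    	out, err := yaml.Marshal(rc)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }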
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:00:25.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 02:00:26.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-901ad1d6-065e-4218-b2d6-3c6ec990655a" in namespace "downward-api-5962" to be "success or failure"
Aug 27 02:00:26.909: INFO: Pod "downwardapi-volume-901ad1d6-065e-4218-b2d6-3c6ec990655a": Phase="Pending", Reason="", readiness=false. Elapsed: 25.675672ms
Aug 27 02:00:29.147: INFO: Pod "downwardapi-volume-901ad1d6-065e-4218-b2d6-3c6ec990655a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264395095s
Aug 27 02:00:31.373: INFO: Pod "downwardapi-volume-901ad1d6-065e-4218-b2d6-3c6ec990655a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489794138s
Aug 27 02:00:33.377: INFO: Pod "downwardapi-volume-901ad1d6-065e-4218-b2d6-3c6ec990655a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.494054845s
STEP: Saw pod success
Aug 27 02:00:33.377: INFO: Pod "downwardapi-volume-901ad1d6-065e-4218-b2d6-3c6ec990655a" satisfied condition "success or failure"
Aug 27 02:00:33.380: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-901ad1d6-065e-4218-b2d6-3c6ec990655a container client-container: 
STEP: delete the pod
Aug 27 02:00:33.479: INFO: Waiting for pod downwardapi-volume-901ad1d6-065e-4218-b2d6-3c6ec990655a to disappear
Aug 27 02:00:33.494: INFO: Pod downwardapi-volume-901ad1d6-065e-4218-b2d6-3c6ec990655a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:00:33.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5962" for this suite.

• [SLOW TEST:7.811 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3173,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:00:33.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-e4ce50d2-0072-41d4-bb61-b371b91eafb2
STEP: Creating a pod to test consume configMaps
Aug 27 02:00:33.663: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e8247bcb-4ab6-4126-b63b-a2977e0520d8" in namespace "projected-6189" to be "success or failure"
Aug 27 02:00:33.689: INFO: Pod "pod-projected-configmaps-e8247bcb-4ab6-4126-b63b-a2977e0520d8": Phase="Pending", Reason="", readiness=false. Elapsed: 25.113645ms
Aug 27 02:00:35.693: INFO: Pod "pod-projected-configmaps-e8247bcb-4ab6-4126-b63b-a2977e0520d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029168336s
Aug 27 02:00:37.696: INFO: Pod "pod-projected-configmaps-e8247bcb-4ab6-4126-b63b-a2977e0520d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032431592s
Aug 27 02:00:39.700: INFO: Pod "pod-projected-configmaps-e8247bcb-4ab6-4126-b63b-a2977e0520d8": Phase="Running", Reason="", readiness=true. Elapsed: 6.036627105s
Aug 27 02:00:41.704: INFO: Pod "pod-projected-configmaps-e8247bcb-4ab6-4126-b63b-a2977e0520d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040448532s
STEP: Saw pod success
Aug 27 02:00:41.704: INFO: Pod "pod-projected-configmaps-e8247bcb-4ab6-4126-b63b-a2977e0520d8" satisfied condition "success or failure"
Aug 27 02:00:41.707: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e8247bcb-4ab6-4126-b63b-a2977e0520d8 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 02:00:41.882: INFO: Waiting for pod pod-projected-configmaps-e8247bcb-4ab6-4126-b63b-a2977e0520d8 to disappear
Aug 27 02:00:41.920: INFO: Pod pod-projected-configmaps-e8247bcb-4ab6-4126-b63b-a2977e0520d8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:00:41.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6189" for this suite.

• [SLOW TEST:8.426 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3209,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:00:41.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0827 02:01:22.581383       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 02:01:22.581: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:01:22.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4793" for this suite.

• [SLOW TEST:40.664 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":191,"skipped":3229,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
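------------------------------
The garbage collector test above deletes an RC with orphaning semantics, so the GC strips owner references from the pods instead of deleting them. A sketch of the delete options involved; this only builds the options object, and wiring it to a client delete call is omitted:

    // DeleteOptions that orphan dependents rather than cascading.
    package main

    import (
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	orphan := metav1.DeletePropagationOrphan
    	opts := metav1.DeleteOptions{
    		// With Orphan propagation the RC goes away but its pods stay,
    		// which is what the 30-second check in the log verifies.
    		PropagationPolicy: &orphan,
    	}
    	out, err := yaml.Marshal(opts)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }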
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:01:22.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-21d1bdea-ef10-41c8-a44a-e295510e5c68
STEP: Creating secret with name s-test-opt-upd-1ef7e8c9-347d-4de8-993c-30782e638c73
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-21d1bdea-ef10-41c8-a44a-e295510e5c68
STEP: Updating secret s-test-opt-upd-1ef7e8c9-347d-4de8-993c-30782e638c73
STEP: Creating secret with name s-test-opt-create-12e88e80-978b-408f-8d7e-e645401da948
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:02:39.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2333" for this suite.

• [SLOW TEST:77.483 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3259,"failed":0}
SSSSSS
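------------------------------
The Secrets test above relies on optional secret volumes: an optional mount lets the pod start even when the secret is absent, and later creates, updates, and deletes of the secret are eventually reflected in the mounted files, which is what the "waiting to observe update in volume" step polls for. A sketch of such a volume (names are illustrative assumptions):

    // A secret volume marked optional, tolerating a missing secret.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	optional := true
    	vol := corev1.Volume{
    		Name: "creds",
    		VolumeSource: corev1.VolumeSource{
    			Secret: &corev1.SecretVolumeSource{
    				SecretName: "s-test-opt-create",
    				Optional:   &optional,
    			},
    		},
    	}
    	out, err := yaml.Marshal(vol)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }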
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:02:40.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 27 02:02:40.947: INFO: Waiting up to 5m0s for pod "downward-api-49cea853-2163-42c3-bbf1-f745b161fb75" in namespace "downward-api-9549" to be "success or failure"
Aug 27 02:02:40.963: INFO: Pod "downward-api-49cea853-2163-42c3-bbf1-f745b161fb75": Phase="Pending", Reason="", readiness=false. Elapsed: 16.335239ms
Aug 27 02:02:43.288: INFO: Pod "downward-api-49cea853-2163-42c3-bbf1-f745b161fb75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341483897s
Aug 27 02:02:45.291: INFO: Pod "downward-api-49cea853-2163-42c3-bbf1-f745b161fb75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344263198s
Aug 27 02:02:47.294: INFO: Pod "downward-api-49cea853-2163-42c3-bbf1-f745b161fb75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.347157474s
STEP: Saw pod success
Aug 27 02:02:47.294: INFO: Pod "downward-api-49cea853-2163-42c3-bbf1-f745b161fb75" satisfied condition "success or failure"
Aug 27 02:02:47.296: INFO: Trying to get logs from node jerma-worker pod downward-api-49cea853-2163-42c3-bbf1-f745b161fb75 container dapi-container: 
STEP: delete the pod
Aug 27 02:02:47.443: INFO: Waiting for pod downward-api-49cea853-2163-42c3-bbf1-f745b161fb75 to disappear
Aug 27 02:02:47.713: INFO: Pod downward-api-49cea853-2163-42c3-bbf1-f745b161fb75 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:02:47.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9549" for this suite.

• [SLOW TEST:7.700 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3265,"failed":0}
SSSSSSSSSSS
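------------------------------
The Downward API test above injects the pod's UID through an env var fieldRef. A minimal sketch of that env list (the variable name is an illustrative assumption):

    // Downward API env var exposing metadata.uid to the container.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	env := []corev1.EnvVar{{
    		Name: "POD_UID",
    		ValueFrom: &corev1.EnvVarSource{
    			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
    		},
    	}}
    	out, err := yaml.Marshal(env)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }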
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:02:47.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:02:48.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2806" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":194,"skipped":3276,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:02:49.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:02:50.093: INFO: Creating ReplicaSet my-hostname-basic-95f3683e-7fd0-4210-b494-dc003c7147ee
Aug 27 02:02:50.571: INFO: Pod name my-hostname-basic-95f3683e-7fd0-4210-b494-dc003c7147ee: Found 0 pods out of 1
Aug 27 02:02:55.606: INFO: Pod name my-hostname-basic-95f3683e-7fd0-4210-b494-dc003c7147ee: Found 1 pods out of 1
Aug 27 02:02:55.606: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-95f3683e-7fd0-4210-b494-dc003c7147ee" is running
Aug 27 02:02:55.609: INFO: Pod "my-hostname-basic-95f3683e-7fd0-4210-b494-dc003c7147ee-rrmnb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 02:02:50 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 02:02:54 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 02:02:54 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 02:02:50 +0000 UTC Reason: Message:}])
Aug 27 02:02:55.609: INFO: Trying to dial the pod
Aug 27 02:03:00.616: INFO: Controller my-hostname-basic-95f3683e-7fd0-4210-b494-dc003c7147ee: Got expected result from replica 1 [my-hostname-basic-95f3683e-7fd0-4210-b494-dc003c7147ee-rrmnb]: "my-hostname-basic-95f3683e-7fd0-4210-b494-dc003c7147ee-rrmnb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:03:00.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7161" for this suite.

• [SLOW TEST:11.606 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":195,"skipped":3295,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:03:00.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-5rdc7 in namespace proxy-7562
I0827 02:03:00.940900       6 runners.go:189] Created replication controller with name: proxy-service-5rdc7, namespace: proxy-7562, replica count: 1
I0827 02:03:01.991254       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 02:03:02.991485       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 02:03:03.991654       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 02:03:04.991860       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 02:03:05.992043       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 02:03:06.992223       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 02:03:07.992394       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 02:03:08.992535       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 02:03:09.992696       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 02:03:10.992955       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 02:03:11.993145       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 02:03:12.993346       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 02:03:13.993538       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 02:03:14.993753       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 02:03:15.993935       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0827 02:03:16.994126       6 runners.go:189] proxy-service-5rdc7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 27 02:03:17.295: INFO: setup took 16.521903716s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 27 02:03:17.301: INFO: (0) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 5.553828ms)
Aug 27 02:03:17.309: INFO: (0) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 13.499211ms)
Aug 27 02:03:17.336: INFO: (0) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 41.315199ms)
Aug 27 02:03:17.336: INFO: (0) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 41.132607ms)
Aug 27 02:03:17.336: INFO: (0) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 41.433776ms)
Aug 27 02:03:17.339: INFO: (0) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 44.185296ms)
Aug 27 02:03:17.339: INFO: (0) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 43.98935ms)
Aug 27 02:03:17.340: INFO: (0) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 44.501919ms)
Aug 27 02:03:17.340: INFO: (0) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf/proxy/: test (200; 44.571454ms)
Aug 27 02:03:17.340: INFO: (0) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 44.688485ms)
Aug 27 02:03:17.340: INFO: (0) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 45.12714ms)
Aug 27 02:03:17.340: INFO: (0) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 45.237246ms)
Aug 27 02:03:17.340: INFO: (0) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 45.207253ms)
Aug 27 02:03:17.341: INFO: (0) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 45.596212ms)
Aug 27 02:03:17.341: INFO: (0) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 45.709137ms)
Aug 27 02:03:17.341: INFO: (0) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: ... (200; 3.378162ms)
Aug 27 02:03:17.346: INFO: (1) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 4.605876ms)
Aug 27 02:03:17.349: INFO: (1) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf/proxy/: test (200; 7.696513ms)
Aug 27 02:03:17.349: INFO: (1) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 7.735018ms)
Aug 27 02:03:17.349: INFO: (1) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 7.870042ms)
Aug 27 02:03:17.350: INFO: (1) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 8.828617ms)
Aug 27 02:03:17.350: INFO: (1) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 9.049094ms)
Aug 27 02:03:17.351: INFO: (1) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 10.234017ms)
Aug 27 02:03:17.351: INFO: (1) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 10.287061ms)
Aug 27 02:03:17.352: INFO: (1) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 10.546937ms)
Aug 27 02:03:17.352: INFO: (1) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 10.552267ms)
Aug 27 02:03:17.352: INFO: (1) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 10.548579ms)
Aug 27 02:03:17.352: INFO: (1) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 10.539992ms)
Aug 27 02:03:17.352: INFO: (1) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 10.64144ms)
Aug 27 02:03:17.352: INFO: (1) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test (200; 2.646125ms)
Aug 27 02:03:17.355: INFO: (2) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 2.589065ms)
Aug 27 02:03:17.355: INFO: (2) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 3.053343ms)
Aug 27 02:03:17.355: INFO: (2) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.035231ms)
Aug 27 02:03:17.355: INFO: (2) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test<... (200; 3.069254ms)
Aug 27 02:03:17.355: INFO: (2) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 3.175519ms)
Aug 27 02:03:17.355: INFO: (2) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 3.152844ms)
Aug 27 02:03:17.355: INFO: (2) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 3.097245ms)
Aug 27 02:03:17.355: INFO: (2) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 3.25316ms)
Aug 27 02:03:17.356: INFO: (2) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 3.445761ms)
Aug 27 02:03:17.356: INFO: (2) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 3.514008ms)
Aug 27 02:03:17.356: INFO: (2) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 3.474518ms)
Aug 27 02:03:17.356: INFO: (2) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 3.484086ms)
Aug 27 02:03:17.356: INFO: (2) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 3.462636ms)
Aug 27 02:03:17.356: INFO: (2) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 3.585581ms)
Aug 27 02:03:17.358: INFO: (3) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 2.082936ms)
Aug 27 02:03:17.359: INFO: (3) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 2.804072ms)
Aug 27 02:03:17.359: INFO: (3) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 2.85981ms)
Aug 27 02:03:17.359: INFO: (3) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 2.824008ms)
Aug 27 02:03:17.359: INFO: (3) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test (200; 3.298637ms)
Aug 27 02:03:17.359: INFO: (3) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 3.280081ms)
Aug 27 02:03:17.359: INFO: (3) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 3.247744ms)
Aug 27 02:03:17.359: INFO: (3) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.262694ms)
Aug 27 02:03:17.359: INFO: (3) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 3.277093ms)
Aug 27 02:03:17.359: INFO: (3) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 3.263274ms)
Aug 27 02:03:17.359: INFO: (3) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 3.280194ms)
Aug 27 02:03:17.361: INFO: (3) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 5.545096ms)
Aug 27 02:03:17.363: INFO: (4) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 1.75626ms)
Aug 27 02:03:17.364: INFO: (4) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf/proxy/: test (200; 2.816216ms)
Aug 27 02:03:17.364: INFO: (4) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 2.8343ms)
Aug 27 02:03:17.364: INFO: (4) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 2.858968ms)
Aug 27 02:03:17.365: INFO: (4) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 3.155841ms)
Aug 27 02:03:17.365: INFO: (4) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 3.20999ms)
Aug 27 02:03:17.365: INFO: (4) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test (200; 2.677652ms)
Aug 27 02:03:17.368: INFO: (5) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 2.774248ms)
Aug 27 02:03:17.368: INFO: (5) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 3.047369ms)
Aug 27 02:03:17.368: INFO: (5) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 3.112258ms)
Aug 27 02:03:17.368: INFO: (5) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 3.454392ms)
Aug 27 02:03:17.368: INFO: (5) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test<... (200; 3.586021ms)
Aug 27 02:03:17.369: INFO: (5) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 3.605481ms)
Aug 27 02:03:17.369: INFO: (5) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 3.812773ms)
Aug 27 02:03:17.370: INFO: (6) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 1.489973ms)
Aug 27 02:03:17.373: INFO: (6) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf/proxy/: test (200; 3.985869ms)
Aug 27 02:03:17.373: INFO: (6) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 4.09863ms)
Aug 27 02:03:17.373: INFO: (6) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 4.333821ms)
Aug 27 02:03:17.373: INFO: (6) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 4.413624ms)
Aug 27 02:03:17.373: INFO: (6) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 4.359776ms)
Aug 27 02:03:17.373: INFO: (6) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 4.375935ms)
Aug 27 02:03:17.373: INFO: (6) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 4.382873ms)
Aug 27 02:03:17.373: INFO: (6) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 4.52452ms)
Aug 27 02:03:17.373: INFO: (6) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 4.517122ms)
Aug 27 02:03:17.373: INFO: (6) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 4.54103ms)
Aug 27 02:03:17.373: INFO: (6) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 4.691526ms)
Aug 27 02:03:17.374: INFO: (6) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 4.774958ms)
Aug 27 02:03:17.374: INFO: (6) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 4.70982ms)
Aug 27 02:03:17.374: INFO: (6) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: ... (200; 3.567511ms)
Aug 27 02:03:17.377: INFO: (7) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 3.5833ms)
Aug 27 02:03:17.377: INFO: (7) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 3.621812ms)
Aug 27 02:03:17.377: INFO: (7) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 3.572999ms)
Aug 27 02:03:17.378: INFO: (7) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 3.944991ms)
Aug 27 02:03:17.378: INFO: (7) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.938069ms)
Aug 27 02:03:17.378: INFO: (7) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.952576ms)
Aug 27 02:03:17.378: INFO: (7) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test (200; 3.940865ms)
Aug 27 02:03:17.378: INFO: (7) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 3.983369ms)
Aug 27 02:03:17.378: INFO: (7) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 4.093425ms)
Aug 27 02:03:17.381: INFO: (8) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 3.030224ms)
Aug 27 02:03:17.381: INFO: (8) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.002893ms)
Aug 27 02:03:17.381: INFO: (8) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 3.0126ms)
Aug 27 02:03:17.381: INFO: (8) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 3.058935ms)
Aug 27 02:03:17.381: INFO: (8) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test (200; 3.944711ms)
Aug 27 02:03:17.382: INFO: (8) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 3.909392ms)
Aug 27 02:03:17.382: INFO: (8) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 3.913641ms)
Aug 27 02:03:17.382: INFO: (8) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 3.893609ms)
Aug 27 02:03:17.382: INFO: (8) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 3.957026ms)
Aug 27 02:03:17.382: INFO: (8) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 4.03543ms)
Aug 27 02:03:17.382: INFO: (8) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 3.992606ms)
Aug 27 02:03:17.382: INFO: (8) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 4.049352ms)
Aug 27 02:03:17.384: INFO: (9) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test<... (200; 3.515641ms)
Aug 27 02:03:17.385: INFO: (9) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf/proxy/: test (200; 3.550835ms)
Aug 27 02:03:17.385: INFO: (9) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.628351ms)
Aug 27 02:03:17.385: INFO: (9) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 3.557298ms)
Aug 27 02:03:17.386: INFO: (9) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.676845ms)
Aug 27 02:03:17.386: INFO: (9) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 3.644035ms)
Aug 27 02:03:17.386: INFO: (9) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 3.669257ms)
Aug 27 02:03:17.386: INFO: (9) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 3.658611ms)
Aug 27 02:03:17.386: INFO: (9) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 3.836399ms)
Aug 27 02:03:17.386: INFO: (9) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 3.788458ms)
Aug 27 02:03:17.386: INFO: (9) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 3.859224ms)
Aug 27 02:03:17.386: INFO: (9) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 3.935172ms)
Aug 27 02:03:17.386: INFO: (9) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 3.898411ms)
Aug 27 02:03:17.388: INFO: (10) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 1.809286ms)
Aug 27 02:03:17.388: INFO: (10) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf/proxy/: test (200; 2.014291ms)
Aug 27 02:03:17.389: INFO: (10) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 2.47607ms)
Aug 27 02:03:17.389: INFO: (10) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 2.442411ms)
Aug 27 02:03:17.389: INFO: (10) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 2.554416ms)
Aug 27 02:03:17.389: INFO: (10) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 2.398624ms)
Aug 27 02:03:17.389: INFO: (10) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 2.649849ms)
Aug 27 02:03:17.389: INFO: (10) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 3.658888ms)
Aug 27 02:03:17.390: INFO: (10) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 3.204752ms)
Aug 27 02:03:17.390: INFO: (10) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 3.285342ms)
Aug 27 02:03:17.390: INFO: (10) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 2.93115ms)
Aug 27 02:03:17.390: INFO: (10) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 3.656439ms)
Aug 27 02:03:17.390: INFO: (10) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 3.607059ms)
Aug 27 02:03:17.390: INFO: (10) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: ... (200; 3.150468ms)
Aug 27 02:03:17.390: INFO: (10) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 3.254425ms)
Aug 27 02:03:17.392: INFO: (11) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 2.792818ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf/proxy/: test (200; 3.146823ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 3.312137ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.36932ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 3.360609ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 3.431926ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.3737ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 3.384166ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 3.552742ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 3.525401ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 3.607841ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 3.602678ms)
Aug 27 02:03:17.393: INFO: (11) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test (200; 255.225291ms)
Aug 27 02:03:17.649: INFO: (12) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 255.190535ms)
Aug 27 02:03:17.649: INFO: (12) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 255.263519ms)
Aug 27 02:03:17.649: INFO: (12) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 255.316943ms)
Aug 27 02:03:17.649: INFO: (12) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 255.243851ms)
Aug 27 02:03:17.649: INFO: (12) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 255.346719ms)
Aug 27 02:03:17.649: INFO: (12) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test<... (200; 255.270057ms)
Aug 27 02:03:17.649: INFO: (12) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 255.452125ms)
Aug 27 02:03:17.651: INFO: (12) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 256.980607ms)
Aug 27 02:03:17.651: INFO: (12) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 256.96826ms)
Aug 27 02:03:17.654: INFO: (12) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 260.779594ms)
Aug 27 02:03:17.655: INFO: (12) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 260.981376ms)
Aug 27 02:03:17.655: INFO: (12) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 260.999108ms)
Aug 27 02:03:17.655: INFO: (12) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 261.635977ms)
Aug 27 02:03:17.659: INFO: (13) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 3.584149ms)
Aug 27 02:03:17.659: INFO: (13) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 4.024495ms)
Aug 27 02:03:17.659: INFO: (13) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 4.080517ms)
Aug 27 02:03:17.659: INFO: (13) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 4.063195ms)
Aug 27 02:03:17.659: INFO: (13) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 4.231837ms)
Aug 27 02:03:17.660: INFO: (13) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 4.237405ms)
Aug 27 02:03:17.660: INFO: (13) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 4.172313ms)
Aug 27 02:03:17.660: INFO: (13) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 4.326517ms)
Aug 27 02:03:17.660: INFO: (13) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 4.443033ms)
Aug 27 02:03:17.660: INFO: (13) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf/proxy/: test (200; 4.339458ms)
Aug 27 02:03:17.660: INFO: (13) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 4.391589ms)
Aug 27 02:03:17.660: INFO: (13) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 4.369325ms)
Aug 27 02:03:17.660: INFO: (13) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 4.453355ms)
Aug 27 02:03:17.660: INFO: (13) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 4.43839ms)
Aug 27 02:03:17.660: INFO: (13) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 4.52611ms)
Aug 27 02:03:17.660: INFO: (13) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test<... (200; 3.323074ms)
Aug 27 02:03:17.663: INFO: (14) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 3.347479ms)
Aug 27 02:03:17.663: INFO: (14) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf/proxy/: test (200; 3.427794ms)
Aug 27 02:03:17.664: INFO: (14) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 4.086189ms)
Aug 27 02:03:17.664: INFO: (14) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 4.1739ms)
Aug 27 02:03:17.665: INFO: (14) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 4.828527ms)
Aug 27 02:03:17.666: INFO: (14) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 5.648573ms)
Aug 27 02:03:17.666: INFO: (14) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 5.716354ms)
Aug 27 02:03:17.666: INFO: (14) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 5.754701ms)
Aug 27 02:03:17.666: INFO: (14) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 5.739844ms)
Aug 27 02:03:17.666: INFO: (14) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 5.763797ms)
Aug 27 02:03:17.666: INFO: (14) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 5.890469ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 3.544162ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 3.533546ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 3.513273ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.503014ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 3.522466ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 3.554831ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 3.588132ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 3.577134ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 3.572006ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 3.567513ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 3.588076ms)
Aug 27 02:03:17.669: INFO: (15) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test (200; 3.724852ms)
Aug 27 02:03:17.670: INFO: (15) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.744494ms)
Aug 27 02:03:17.670: INFO: (15) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 3.900964ms)
Aug 27 02:03:17.672: INFO: (16) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 2.387716ms)
Aug 27 02:03:17.672: INFO: (16) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 2.37689ms)
Aug 27 02:03:17.672: INFO: (16) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 2.487299ms)
Aug 27 02:03:17.672: INFO: (16) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 2.426693ms)
Aug 27 02:03:17.672: INFO: (16) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 2.530044ms)
Aug 27 02:03:17.673: INFO: (16) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 2.911471ms)
Aug 27 02:03:17.673: INFO: (16) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test (200; 3.350504ms)
Aug 27 02:03:17.673: INFO: (16) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.437375ms)
Aug 27 02:03:17.673: INFO: (16) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 3.354215ms)
Aug 27 02:03:17.673: INFO: (16) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname1/proxy/: foo (200; 3.52337ms)
Aug 27 02:03:17.673: INFO: (16) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 3.513981ms)
Aug 27 02:03:17.674: INFO: (16) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 3.651759ms)
Aug 27 02:03:17.674: INFO: (16) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 4.227612ms)
Aug 27 02:03:17.677: INFO: (17) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 3.276871ms)
Aug 27 02:03:17.677: INFO: (17) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 3.308808ms)
Aug 27 02:03:17.678: INFO: (17) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 3.310613ms)
Aug 27 02:03:17.678: INFO: (17) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 3.772117ms)
Aug 27 02:03:17.678: INFO: (17) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf/proxy/: test (200; 3.802332ms)
Aug 27 02:03:17.678: INFO: (17) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 4.061964ms)
Aug 27 02:03:17.678: INFO: (17) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 4.097096ms)
Aug 27 02:03:17.678: INFO: (17) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:1080/proxy/: ... (200; 4.18685ms)
Aug 27 02:03:17.678: INFO: (17) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 4.21049ms)
Aug 27 02:03:17.678: INFO: (17) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: ... (200; 2.570662ms)
Aug 27 02:03:17.681: INFO: (18) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 2.55372ms)
Aug 27 02:03:17.681: INFO: (18) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf/proxy/: test (200; 2.712425ms)
Aug 27 02:03:17.681: INFO: (18) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 2.762669ms)
Aug 27 02:03:17.681: INFO: (18) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 2.822544ms)
Aug 27 02:03:17.682: INFO: (18) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 3.116314ms)
Aug 27 02:03:17.682: INFO: (18) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 3.060859ms)
Aug 27 02:03:17.682: INFO: (18) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: ... (200; 5.294021ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 5.314681ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:460/proxy/: tls baz (200; 5.377417ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:462/proxy/: tls qux (200; 5.37814ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname1/proxy/: tls baz (200; 5.322915ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:1080/proxy/: test<... (200; 5.287054ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/services/https:proxy-service-5rdc7:tlsportname2/proxy/: tls qux (200; 5.301348ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/pods/proxy-service-5rdc7-8zvbf:162/proxy/: bar (200; 5.365121ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname1/proxy/: foo (200; 5.379605ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/pods/https:proxy-service-5rdc7-8zvbf:443/proxy/: test (200; 5.590334ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/services/http:proxy-service-5rdc7:portname2/proxy/: bar (200; 5.575636ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/pods/http:proxy-service-5rdc7-8zvbf:160/proxy/: foo (200; 5.571353ms)
Aug 27 02:03:17.688: INFO: (19) /api/v1/namespaces/proxy-7562/services/proxy-service-5rdc7:portname2/proxy/: bar (200; 5.618545ms)
STEP: deleting ReplicationController proxy-service-5rdc7 in namespace proxy-7562, will wait for the garbage collector to delete the pods
Aug 27 02:03:17.745: INFO: Deleting ReplicationController proxy-service-5rdc7 took: 4.9135ms
Aug 27 02:03:18.045: INFO: Terminating ReplicationController proxy-service-5rdc7 pods took: 300.178241ms
[AfterEach] version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:03:21.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7562" for this suite.

• [SLOW TEST:21.428 seconds]
[sig-network] Proxy
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":196,"skipped":3331,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:03:22.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-59e22cee-5296-4e5f-883c-ce746b1bbcb8
STEP: Creating a pod to test consume secrets
Aug 27 02:03:22.738: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-14e52a0b-e296-4168-b6d6-0510e43b717b" in namespace "projected-3005" to be "success or failure"
Aug 27 02:03:22.881: INFO: Pod "pod-projected-secrets-14e52a0b-e296-4168-b6d6-0510e43b717b": Phase="Pending", Reason="", readiness=false. Elapsed: 142.908403ms
Aug 27 02:03:24.885: INFO: Pod "pod-projected-secrets-14e52a0b-e296-4168-b6d6-0510e43b717b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146189891s
Aug 27 02:03:26.889: INFO: Pod "pod-projected-secrets-14e52a0b-e296-4168-b6d6-0510e43b717b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150459922s
Aug 27 02:03:28.924: INFO: Pod "pod-projected-secrets-14e52a0b-e296-4168-b6d6-0510e43b717b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.185148844s
STEP: Saw pod success
Aug 27 02:03:28.924: INFO: Pod "pod-projected-secrets-14e52a0b-e296-4168-b6d6-0510e43b717b" satisfied condition "success or failure"
Aug 27 02:03:28.927: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-14e52a0b-e296-4168-b6d6-0510e43b717b container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 02:03:28.947: INFO: Waiting for pod pod-projected-secrets-14e52a0b-e296-4168-b6d6-0510e43b717b to disappear
Aug 27 02:03:29.018: INFO: Pod pod-projected-secrets-14e52a0b-e296-4168-b6d6-0510e43b717b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:03:29.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3005" for this suite.

• [SLOW TEST:6.995 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3346,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:03:29.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:03:41.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1456" for this suite.

• [SLOW TEST:12.166 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":198,"skipped":3379,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:03:41.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:03:50.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2760" for this suite.

• [SLOW TEST:8.958 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":199,"skipped":3394,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:03:50.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 27 02:03:50.297: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7714 /api/v1/namespaces/watch-7714/configmaps/e2e-watch-test-resource-version 3a61b492-c809-45dd-a144-100eace7ef13 4093532 0 2020-08-27 02:03:50 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 02:03:50.297: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7714 /api/v1/namespaces/watch-7714/configmaps/e2e-watch-test-resource-version 3a61b492-c809-45dd-a144-100eace7ef13 4093533 0 2020-08-27 02:03:50 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:03:50.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7714" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":200,"skipped":3408,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:03:50.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-7771d7bb-9f70-4beb-92e2-52b1f7c9ef34
STEP: Creating configMap with name cm-test-opt-upd-58d48c60-b8c2-475e-8fd1-f5ae46164f35
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-7771d7bb-9f70-4beb-92e2-52b1f7c9ef34
STEP: Updating configmap cm-test-opt-upd-58d48c60-b8c2-475e-8fd1-f5ae46164f35
STEP: Creating configMap with name cm-test-opt-create-edaeb829-d450-4c37-96b1-c4d6afac5759
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:04:01.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6774" for this suite.

• [SLOW TEST:10.939 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3412,"failed":0}
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:04:01.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-3153/configmap-test-52d98e5c-6521-44fc-91f9-a036359d5c27
STEP: Creating a pod to test consume configMaps
Aug 27 02:04:01.713: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ec1d98c-1b68-4af6-8cf7-f3026fd72e5f" in namespace "configmap-3153" to be "success or failure"
Aug 27 02:04:01.726: INFO: Pod "pod-configmaps-7ec1d98c-1b68-4af6-8cf7-f3026fd72e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.850283ms
Aug 27 02:04:03.729: INFO: Pod "pod-configmaps-7ec1d98c-1b68-4af6-8cf7-f3026fd72e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015788404s
Aug 27 02:04:05.757: INFO: Pod "pod-configmaps-7ec1d98c-1b68-4af6-8cf7-f3026fd72e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043660588s
Aug 27 02:04:07.900: INFO: Pod "pod-configmaps-7ec1d98c-1b68-4af6-8cf7-f3026fd72e5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.187386554s
STEP: Saw pod success
Aug 27 02:04:07.900: INFO: Pod "pod-configmaps-7ec1d98c-1b68-4af6-8cf7-f3026fd72e5f" satisfied condition "success or failure"
Aug 27 02:04:07.903: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-7ec1d98c-1b68-4af6-8cf7-f3026fd72e5f container env-test: 
STEP: delete the pod
Aug 27 02:04:08.500: INFO: Waiting for pod pod-configmaps-7ec1d98c-1b68-4af6-8cf7-f3026fd72e5f to disappear
Aug 27 02:04:08.678: INFO: Pod pod-configmaps-7ec1d98c-1b68-4af6-8cf7-f3026fd72e5f no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:04:08.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3153" for this suite.

• [SLOW TEST:7.443 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3419,"failed":0}
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:04:08.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 27 02:04:09.033: INFO: Waiting up to 5m0s for pod "pod-ea434ec6-d304-4e72-b87d-2f132c0534d7" in namespace "emptydir-3317" to be "success or failure"
Aug 27 02:04:09.379: INFO: Pod "pod-ea434ec6-d304-4e72-b87d-2f132c0534d7": Phase="Pending", Reason="", readiness=false. Elapsed: 346.231272ms
Aug 27 02:04:11.487: INFO: Pod "pod-ea434ec6-d304-4e72-b87d-2f132c0534d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.453719962s
Aug 27 02:04:13.491: INFO: Pod "pod-ea434ec6-d304-4e72-b87d-2f132c0534d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.457840946s
STEP: Saw pod success
Aug 27 02:04:13.491: INFO: Pod "pod-ea434ec6-d304-4e72-b87d-2f132c0534d7" satisfied condition "success or failure"
Aug 27 02:04:13.494: INFO: Trying to get logs from node jerma-worker2 pod pod-ea434ec6-d304-4e72-b87d-2f132c0534d7 container test-container: 
STEP: delete the pod
Aug 27 02:04:13.542: INFO: Waiting for pod pod-ea434ec6-d304-4e72-b87d-2f132c0534d7 to disappear
Aug 27 02:04:13.552: INFO: Pod pod-ea434ec6-d304-4e72-b87d-2f132c0534d7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:04:13.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3317" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3419,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:04:13.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 27 02:04:13.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:04:28.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4769" for this suite.

• [SLOW TEST:14.651 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":204,"skipped":3423,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:04:28.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4474.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4474.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4474.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4474.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4474.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4474.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 27 02:04:38.487: INFO: DNS probes using dns-4474/dns-test-5a42db0c-25dd-4a9a-a9a8-adaeb08e5086 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:04:38.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4474" for this suite.

• [SLOW TEST:11.375 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":205,"skipped":3474,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:04:39.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 27 02:04:59.122: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 02:04:59.126: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 02:05:01.126: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 02:05:01.184: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 02:05:03.126: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 02:05:03.130: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 02:05:05.126: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 02:05:05.131: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 02:05:07.126: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 02:05:07.130: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 02:05:09.126: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 02:05:09.130: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 02:05:11.126: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 02:05:11.317: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 27 02:05:13.126: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 27 02:05:13.130: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:05:13.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-71" for this suite.

• [SLOW TEST:33.552 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3477,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:05:13.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Aug 27 02:05:13.857: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix301087193/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:05:13.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2040" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":207,"skipped":3484,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:05:13.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 27 02:05:14.694: INFO: Waiting up to 5m0s for pod "downward-api-b7b671a7-6112-4cba-a3a6-bdba60636576" in namespace "downward-api-7141" to be "success or failure"
Aug 27 02:05:14.778: INFO: Pod "downward-api-b7b671a7-6112-4cba-a3a6-bdba60636576": Phase="Pending", Reason="", readiness=false. Elapsed: 83.645937ms
Aug 27 02:05:16.793: INFO: Pod "downward-api-b7b671a7-6112-4cba-a3a6-bdba60636576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098910671s
Aug 27 02:05:18.975: INFO: Pod "downward-api-b7b671a7-6112-4cba-a3a6-bdba60636576": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280630771s
Aug 27 02:05:21.427: INFO: Pod "downward-api-b7b671a7-6112-4cba-a3a6-bdba60636576": Phase="Pending", Reason="", readiness=false. Elapsed: 6.733269894s
Aug 27 02:05:23.557: INFO: Pod "downward-api-b7b671a7-6112-4cba-a3a6-bdba60636576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.862612497s
STEP: Saw pod success
Aug 27 02:05:23.557: INFO: Pod "downward-api-b7b671a7-6112-4cba-a3a6-bdba60636576" satisfied condition "success or failure"
Aug 27 02:05:23.560: INFO: Trying to get logs from node jerma-worker pod downward-api-b7b671a7-6112-4cba-a3a6-bdba60636576 container dapi-container: 
STEP: delete the pod
Aug 27 02:05:24.099: INFO: Waiting for pod downward-api-b7b671a7-6112-4cba-a3a6-bdba60636576 to disappear
Aug 27 02:05:24.164: INFO: Pod downward-api-b7b671a7-6112-4cba-a3a6-bdba60636576 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:05:24.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7141" for this suite.

• [SLOW TEST:10.233 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3486,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:05:24.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-581386f6-45b0-460f-ab8f-b015e8a71f22
STEP: Creating a pod to test consume configMaps
Aug 27 02:05:25.610: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-81ddd88f-3883-4604-8a62-71f3f0cca8f4" in namespace "projected-220" to be "success or failure"
Aug 27 02:05:25.613: INFO: Pod "pod-projected-configmaps-81ddd88f-3883-4604-8a62-71f3f0cca8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.66564ms
Aug 27 02:05:27.616: INFO: Pod "pod-projected-configmaps-81ddd88f-3883-4604-8a62-71f3f0cca8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006192258s
Aug 27 02:05:30.142: INFO: Pod "pod-projected-configmaps-81ddd88f-3883-4604-8a62-71f3f0cca8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.531436046s
Aug 27 02:05:32.243: INFO: Pod "pod-projected-configmaps-81ddd88f-3883-4604-8a62-71f3f0cca8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.633000636s
Aug 27 02:05:34.476: INFO: Pod "pod-projected-configmaps-81ddd88f-3883-4604-8a62-71f3f0cca8f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.865405069s
STEP: Saw pod success
Aug 27 02:05:34.476: INFO: Pod "pod-projected-configmaps-81ddd88f-3883-4604-8a62-71f3f0cca8f4" satisfied condition "success or failure"
Aug 27 02:05:34.566: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-81ddd88f-3883-4604-8a62-71f3f0cca8f4 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 27 02:05:35.507: INFO: Waiting for pod pod-projected-configmaps-81ddd88f-3883-4604-8a62-71f3f0cca8f4 to disappear
Aug 27 02:05:35.751: INFO: Pod pod-projected-configmaps-81ddd88f-3883-4604-8a62-71f3f0cca8f4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:05:35.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-220" for this suite.

• [SLOW TEST:11.589 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3509,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:05:35.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Aug 27 02:05:36.168: INFO: Waiting up to 5m0s for pod "client-containers-83429d30-2340-4523-99e8-dbb7490d2965" in namespace "containers-471" to be "success or failure"
Aug 27 02:05:36.303: INFO: Pod "client-containers-83429d30-2340-4523-99e8-dbb7490d2965": Phase="Pending", Reason="", readiness=false. Elapsed: 134.279652ms
Aug 27 02:05:38.307: INFO: Pod "client-containers-83429d30-2340-4523-99e8-dbb7490d2965": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138455223s
Aug 27 02:05:40.423: INFO: Pod "client-containers-83429d30-2340-4523-99e8-dbb7490d2965": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254883639s
Aug 27 02:05:42.680: INFO: Pod "client-containers-83429d30-2340-4523-99e8-dbb7490d2965": Phase="Running", Reason="", readiness=true. Elapsed: 6.511174127s
Aug 27 02:05:44.775: INFO: Pod "client-containers-83429d30-2340-4523-99e8-dbb7490d2965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.606803352s
STEP: Saw pod success
Aug 27 02:05:44.775: INFO: Pod "client-containers-83429d30-2340-4523-99e8-dbb7490d2965" satisfied condition "success or failure"
Aug 27 02:05:45.015: INFO: Trying to get logs from node jerma-worker pod client-containers-83429d30-2340-4523-99e8-dbb7490d2965 container test-container: 
STEP: delete the pod
Aug 27 02:05:45.684: INFO: Waiting for pod client-containers-83429d30-2340-4523-99e8-dbb7490d2965 to disappear
Aug 27 02:05:45.741: INFO: Pod client-containers-83429d30-2340-4523-99e8-dbb7490d2965 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:05:45.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-471" for this suite.

• [SLOW TEST:9.989 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3516,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:05:45.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-d7a5db66-eba6-4410-a078-66cdc7ff035d in namespace container-probe-2097
Aug 27 02:05:52.625: INFO: Started pod busybox-d7a5db66-eba6-4410-a078-66cdc7ff035d in namespace container-probe-2097
STEP: checking the pod's current state and verifying that restartCount is present
Aug 27 02:05:52.627: INFO: Initial restart count of pod busybox-d7a5db66-eba6-4410-a078-66cdc7ff035d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:09:52.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2097" for this suite.

• [SLOW TEST:247.154 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3538,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:09:52.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 27 02:09:52.965: INFO: Waiting up to 5m0s for pod "pod-c37021bb-b332-40d0-855b-8b3b50c168c5" in namespace "emptydir-9683" to be "success or failure"
Aug 27 02:09:52.976: INFO: Pod "pod-c37021bb-b332-40d0-855b-8b3b50c168c5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.900459ms
Aug 27 02:09:54.981: INFO: Pod "pod-c37021bb-b332-40d0-855b-8b3b50c168c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015063291s
Aug 27 02:09:56.985: INFO: Pod "pod-c37021bb-b332-40d0-855b-8b3b50c168c5": Phase="Running", Reason="", readiness=true. Elapsed: 4.019016336s
Aug 27 02:09:58.989: INFO: Pod "pod-c37021bb-b332-40d0-855b-8b3b50c168c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02322148s
STEP: Saw pod success
Aug 27 02:09:58.989: INFO: Pod "pod-c37021bb-b332-40d0-855b-8b3b50c168c5" satisfied condition "success or failure"
Aug 27 02:09:58.992: INFO: Trying to get logs from node jerma-worker pod pod-c37021bb-b332-40d0-855b-8b3b50c168c5 container test-container: 
STEP: delete the pod
Aug 27 02:09:59.029: INFO: Waiting for pod pod-c37021bb-b332-40d0-855b-8b3b50c168c5 to disappear
Aug 27 02:09:59.034: INFO: Pod pod-c37021bb-b332-40d0-855b-8b3b50c168c5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:09:59.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9683" for this suite.

• [SLOW TEST:6.137 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3540,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:09:59.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 27 02:09:59.191: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 02:09:59.203: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 02:09:59.205: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 27 02:09:59.211: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 27 02:09:59.211: INFO: 	Container app ready: true, restart count 0
Aug 27 02:09:59.211: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 02:09:59.211: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 02:09:59.211: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 02:09:59.211: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 02:09:59.211: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 27 02:09:59.226: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container statuses recorded)
Aug 27 02:09:59.226: INFO: 	Container httpd ready: true, restart count 0
Aug 27 02:09:59.226: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 02:09:59.227: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 02:09:59.227: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 27 02:09:59.227: INFO: 	Container app ready: true, restart count 0
Aug 27 02:09:59.227: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 02:09:59.227: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Aug 27 02:09:59.334: INFO: Pod daemon-set-4l8wc requesting resource cpu=0m on Node jerma-worker
Aug 27 02:09:59.334: INFO: Pod daemon-set-cxv46 requesting resource cpu=0m on Node jerma-worker2
Aug 27 02:09:59.334: INFO: Pod test-recreate-deployment-5f94c574ff-k4dkm requesting resource cpu=0m on Node jerma-worker2
Aug 27 02:09:59.334: INFO: Pod kindnet-gxck9 requesting resource cpu=100m on Node jerma-worker2
Aug 27 02:09:59.334: INFO: Pod kindnet-tfrcx requesting resource cpu=100m on Node jerma-worker
Aug 27 02:09:59.334: INFO: Pod kube-proxy-ckhpn requesting resource cpu=0m on Node jerma-worker2
Aug 27 02:09:59.334: INFO: Pod kube-proxy-lgd85 requesting resource cpu=0m on Node jerma-worker
STEP: Starting Pods to consume most of the cluster CPU.
Aug 27 02:09:59.334: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Aug 27 02:09:59.339: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-971b04b6-01d6-4afc-ac46-af43b2cd71c9.162efe0153e18e65], Reason = [Scheduled], Message = [Successfully assigned sched-pred-94/filler-pod-971b04b6-01d6-4afc-ac46-af43b2cd71c9 to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-971b04b6-01d6-4afc-ac46-af43b2cd71c9.162efe01d8a8c6b3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-971b04b6-01d6-4afc-ac46-af43b2cd71c9.162efe0257924713], Reason = [Created], Message = [Created container filler-pod-971b04b6-01d6-4afc-ac46-af43b2cd71c9]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-971b04b6-01d6-4afc-ac46-af43b2cd71c9.162efe026edaeb48], Reason = [Started], Message = [Started container filler-pod-971b04b6-01d6-4afc-ac46-af43b2cd71c9]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-feb4d03f-5a7b-4827-8a1a-b9a6e84587e4.162efe014fe2cef6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-94/filler-pod-feb4d03f-5a7b-4827-8a1a-b9a6e84587e4 to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-feb4d03f-5a7b-4827-8a1a-b9a6e84587e4.162efe0199e4c0f2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-feb4d03f-5a7b-4827-8a1a-b9a6e84587e4.162efe01fb99d220], Reason = [Created], Message = [Created container filler-pod-feb4d03f-5a7b-4827-8a1a-b9a6e84587e4]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-feb4d03f-5a7b-4827-8a1a-b9a6e84587e4.162efe0213300909], Reason = [Started], Message = [Started container filler-pod-feb4d03f-5a7b-4827-8a1a-b9a6e84587e4]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162efe02ba738c27], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162efe02bd734c18], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:10:06.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-94" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:7.531 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":213,"skipped":3565,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:10:06.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:10:06.677: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 27 02:10:06.701: INFO: Number of nodes with available pods: 0
Aug 27 02:10:06.701: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 27 02:10:06.771: INFO: Number of nodes with available pods: 0
Aug 27 02:10:06.771: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:07.775: INFO: Number of nodes with available pods: 0
Aug 27 02:10:07.775: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:08.784: INFO: Number of nodes with available pods: 0
Aug 27 02:10:08.784: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:09.822: INFO: Number of nodes with available pods: 1
Aug 27 02:10:09.822: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 27 02:10:09.889: INFO: Number of nodes with available pods: 1
Aug 27 02:10:09.889: INFO: Number of running nodes: 0, number of available pods: 1
Aug 27 02:10:10.893: INFO: Number of nodes with available pods: 0
Aug 27 02:10:10.893: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 27 02:10:11.053: INFO: Number of nodes with available pods: 0
Aug 27 02:10:11.053: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:12.059: INFO: Number of nodes with available pods: 0
Aug 27 02:10:12.059: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:13.402: INFO: Number of nodes with available pods: 0
Aug 27 02:10:13.402: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:14.056: INFO: Number of nodes with available pods: 0
Aug 27 02:10:14.056: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:15.057: INFO: Number of nodes with available pods: 0
Aug 27 02:10:15.057: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:16.058: INFO: Number of nodes with available pods: 0
Aug 27 02:10:16.058: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:17.057: INFO: Number of nodes with available pods: 0
Aug 27 02:10:17.058: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:18.057: INFO: Number of nodes with available pods: 0
Aug 27 02:10:18.057: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:19.057: INFO: Number of nodes with available pods: 0
Aug 27 02:10:19.057: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:20.057: INFO: Number of nodes with available pods: 0
Aug 27 02:10:20.057: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:21.058: INFO: Number of nodes with available pods: 0
Aug 27 02:10:21.058: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:22.197: INFO: Number of nodes with available pods: 0
Aug 27 02:10:22.197: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:23.058: INFO: Number of nodes with available pods: 0
Aug 27 02:10:23.058: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:24.273: INFO: Number of nodes with available pods: 0
Aug 27 02:10:24.273: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:25.057: INFO: Number of nodes with available pods: 0
Aug 27 02:10:25.057: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:10:26.059: INFO: Number of nodes with available pods: 1
Aug 27 02:10:26.059: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3489, will wait for the garbage collector to delete the pods
Aug 27 02:10:26.161: INFO: Deleting DaemonSet.extensions daemon-set took: 45.259356ms
Aug 27 02:10:26.461: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.292418ms
Aug 27 02:10:41.664: INFO: Number of nodes with available pods: 0
Aug 27 02:10:41.664: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 02:10:41.666: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3489/daemonsets","resourceVersion":"4095115"},"items":null}

Aug 27 02:10:41.668: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3489/pods","resourceVersion":"4095115"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:10:41.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3489" for this suite.

• [SLOW TEST:35.149 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":214,"skipped":3571,"failed":0}
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:10:41.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:10:46.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6642" for this suite.

• [SLOW TEST:5.168 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":215,"skipped":3572,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:10:46.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e19f5810-cd0b-4c07-9d4e-97a802ab20b5
STEP: Creating a pod to test consume secrets
Aug 27 02:10:47.007: INFO: Waiting up to 5m0s for pod "pod-secrets-42aafa82-e89a-412a-bd25-a72785ce2980" in namespace "secrets-230" to be "success or failure"
Aug 27 02:10:47.032: INFO: Pod "pod-secrets-42aafa82-e89a-412a-bd25-a72785ce2980": Phase="Pending", Reason="", readiness=false. Elapsed: 25.168824ms
Aug 27 02:10:49.036: INFO: Pod "pod-secrets-42aafa82-e89a-412a-bd25-a72785ce2980": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02924295s
Aug 27 02:10:51.040: INFO: Pod "pod-secrets-42aafa82-e89a-412a-bd25-a72785ce2980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033403248s
STEP: Saw pod success
Aug 27 02:10:51.040: INFO: Pod "pod-secrets-42aafa82-e89a-412a-bd25-a72785ce2980" satisfied condition "success or failure"
Aug 27 02:10:51.043: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-42aafa82-e89a-412a-bd25-a72785ce2980 container secret-volume-test: 
STEP: delete the pod
Aug 27 02:10:51.083: INFO: Waiting for pod pod-secrets-42aafa82-e89a-412a-bd25-a72785ce2980 to disappear
Aug 27 02:10:51.095: INFO: Pod pod-secrets-42aafa82-e89a-412a-bd25-a72785ce2980 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:10:51.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-230" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3573,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:10:51.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:10:55.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-532" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3580,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:10:55.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-9107e1ba-1b50-4d8d-92c7-9c72c52b291e
STEP: Creating secret with name s-test-opt-upd-6a19d010-6e9c-43ab-88af-08ff224be412
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9107e1ba-1b50-4d8d-92c7-9c72c52b291e
STEP: Updating secret s-test-opt-upd-6a19d010-6e9c-43ab-88af-08ff224be412
STEP: Creating secret with name s-test-opt-create-7d83a604-ca9d-40a9-b640-70ae4027e198
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:12:06.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3420" for this suite.

• [SLOW TEST:71.290 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3611,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:12:06.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 27 02:12:06.592: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:12:14.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6336" for this suite.

• [SLOW TEST:8.453 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":219,"skipped":3637,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:12:14.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Aug 27 02:12:15.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:12:31.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6065" for this suite.

• [SLOW TEST:16.667 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":220,"skipped":3649,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:12:31.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-f9df0b2a-ab2e-418c-a48a-0313189d9168
STEP: Creating a pod to test consume secrets
Aug 27 02:12:31.698: INFO: Waiting up to 5m0s for pod "pod-secrets-716783af-8492-4571-b4bd-0455ae29144e" in namespace "secrets-5937" to be "success or failure"
Aug 27 02:12:31.705: INFO: Pod "pod-secrets-716783af-8492-4571-b4bd-0455ae29144e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.484424ms
Aug 27 02:12:33.779: INFO: Pod "pod-secrets-716783af-8492-4571-b4bd-0455ae29144e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08065844s
Aug 27 02:12:35.783: INFO: Pod "pod-secrets-716783af-8492-4571-b4bd-0455ae29144e": Phase="Running", Reason="", readiness=true. Elapsed: 4.08471429s
Aug 27 02:12:37.787: INFO: Pod "pod-secrets-716783af-8492-4571-b4bd-0455ae29144e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088786608s
STEP: Saw pod success
Aug 27 02:12:37.787: INFO: Pod "pod-secrets-716783af-8492-4571-b4bd-0455ae29144e" satisfied condition "success or failure"
Aug 27 02:12:37.790: INFO: Trying to get logs from node jerma-worker pod pod-secrets-716783af-8492-4571-b4bd-0455ae29144e container secret-volume-test: 
STEP: delete the pod
Aug 27 02:12:37.838: INFO: Waiting for pod pod-secrets-716783af-8492-4571-b4bd-0455ae29144e to disappear
Aug 27 02:12:37.848: INFO: Pod pod-secrets-716783af-8492-4571-b4bd-0455ae29144e no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:12:37.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5937" for this suite.

• [SLOW TEST:6.251 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3650,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:12:37.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 02:12:37.952: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca70b746-3853-4897-ab34-9bf385526335" in namespace "downward-api-2417" to be "success or failure"
Aug 27 02:12:37.956: INFO: Pod "downwardapi-volume-ca70b746-3853-4897-ab34-9bf385526335": Phase="Pending", Reason="", readiness=false. Elapsed: 3.398112ms
Aug 27 02:12:39.959: INFO: Pod "downwardapi-volume-ca70b746-3853-4897-ab34-9bf385526335": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006676156s
Aug 27 02:12:42.002: INFO: Pod "downwardapi-volume-ca70b746-3853-4897-ab34-9bf385526335": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049899007s
STEP: Saw pod success
Aug 27 02:12:42.002: INFO: Pod "downwardapi-volume-ca70b746-3853-4897-ab34-9bf385526335" satisfied condition "success or failure"
Aug 27 02:12:42.004: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ca70b746-3853-4897-ab34-9bf385526335 container client-container: 
STEP: delete the pod
Aug 27 02:12:42.122: INFO: Waiting for pod downwardapi-volume-ca70b746-3853-4897-ab34-9bf385526335 to disappear
Aug 27 02:12:42.133: INFO: Pod downwardapi-volume-ca70b746-3853-4897-ab34-9bf385526335 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:12:42.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2417" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3680,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:12:42.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-3be5605f-9896-4cd2-8574-1a3a2ed88a3f
STEP: Creating a pod to test consume secrets
Aug 27 02:12:42.249: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5ddcfe31-5749-4e7c-bc7a-2812268881ef" in namespace "projected-9519" to be "success or failure"
Aug 27 02:12:42.254: INFO: Pod "pod-projected-secrets-5ddcfe31-5749-4e7c-bc7a-2812268881ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175288ms
Aug 27 02:12:44.272: INFO: Pod "pod-projected-secrets-5ddcfe31-5749-4e7c-bc7a-2812268881ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022659694s
Aug 27 02:12:46.366: INFO: Pod "pod-projected-secrets-5ddcfe31-5749-4e7c-bc7a-2812268881ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117012371s
STEP: Saw pod success
Aug 27 02:12:46.366: INFO: Pod "pod-projected-secrets-5ddcfe31-5749-4e7c-bc7a-2812268881ef" satisfied condition "success or failure"
Aug 27 02:12:46.370: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-5ddcfe31-5749-4e7c-bc7a-2812268881ef container secret-volume-test: 
STEP: delete the pod
Aug 27 02:12:46.632: INFO: Waiting for pod pod-projected-secrets-5ddcfe31-5749-4e7c-bc7a-2812268881ef to disappear
Aug 27 02:12:46.761: INFO: Pod pod-projected-secrets-5ddcfe31-5749-4e7c-bc7a-2812268881ef no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:12:46.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9519" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3681,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:12:46.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-b1aa1a32-1cc8-4d42-b077-9eb174746e3d
Aug 27 02:12:46.924: INFO: Pod name my-hostname-basic-b1aa1a32-1cc8-4d42-b077-9eb174746e3d: Found 0 pods out of 1
Aug 27 02:12:51.975: INFO: Pod name my-hostname-basic-b1aa1a32-1cc8-4d42-b077-9eb174746e3d: Found 1 pods out of 1
Aug 27 02:12:51.975: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b1aa1a32-1cc8-4d42-b077-9eb174746e3d" are running
Aug 27 02:12:52.030: INFO: Pod "my-hostname-basic-b1aa1a32-1cc8-4d42-b077-9eb174746e3d-djd9r" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 02:12:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 02:12:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 02:12:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-27 02:12:46 +0000 UTC Reason: Message:}])
Aug 27 02:12:52.030: INFO: Trying to dial the pod
Aug 27 02:12:57.040: INFO: Controller my-hostname-basic-b1aa1a32-1cc8-4d42-b077-9eb174746e3d: Got expected result from replica 1 [my-hostname-basic-b1aa1a32-1cc8-4d42-b077-9eb174746e3d-djd9r]: "my-hostname-basic-b1aa1a32-1cc8-4d42-b077-9eb174746e3d-djd9r", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:12:57.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6560" for this suite.

• [SLOW TEST:10.279 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":224,"skipped":3687,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:12:57.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the busybox-main-container
Aug 27 02:13:15.455: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6199 PodName:pod-sharedvolume-1b331191-fc5d-4294-a199-bac89b64e472 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 02:13:15.455: INFO: >>> kubeConfig: /root/.kube/config
I0827 02:13:15.481231       6 log.go:172] (0xc002a4a630) (0xc002b285a0) Create stream
I0827 02:13:15.481278       6 log.go:172] (0xc002a4a630) (0xc002b285a0) Stream added, broadcasting: 1
I0827 02:13:15.483399       6 log.go:172] (0xc002a4a630) Reply frame received for 1
I0827 02:13:15.483451       6 log.go:172] (0xc002a4a630) (0xc002b286e0) Create stream
I0827 02:13:15.483465       6 log.go:172] (0xc002a4a630) (0xc002b286e0) Stream added, broadcasting: 3
I0827 02:13:15.484839       6 log.go:172] (0xc002a4a630) Reply frame received for 3
I0827 02:13:15.484868       6 log.go:172] (0xc002a4a630) (0xc002b28780) Create stream
I0827 02:13:15.484878       6 log.go:172] (0xc002a4a630) (0xc002b28780) Stream added, broadcasting: 5
I0827 02:13:15.485900       6 log.go:172] (0xc002a4a630) Reply frame received for 5
I0827 02:13:15.540798       6 log.go:172] (0xc002a4a630) Data frame received for 5
I0827 02:13:15.540864       6 log.go:172] (0xc002b28780) (5) Data frame handling
I0827 02:13:15.540890       6 log.go:172] (0xc002a4a630) Data frame received for 3
I0827 02:13:15.540900       6 log.go:172] (0xc002b286e0) (3) Data frame handling
I0827 02:13:15.540924       6 log.go:172] (0xc002b286e0) (3) Data frame sent
I0827 02:13:15.540933       6 log.go:172] (0xc002a4a630) Data frame received for 3
I0827 02:13:15.540943       6 log.go:172] (0xc002b286e0) (3) Data frame handling
I0827 02:13:15.542177       6 log.go:172] (0xc002a4a630) Data frame received for 1
I0827 02:13:15.542198       6 log.go:172] (0xc002b285a0) (1) Data frame handling
I0827 02:13:15.542217       6 log.go:172] (0xc002b285a0) (1) Data frame sent
I0827 02:13:15.542228       6 log.go:172] (0xc002a4a630) (0xc002b285a0) Stream removed, broadcasting: 1
I0827 02:13:15.542238       6 log.go:172] (0xc002a4a630) Go away received
I0827 02:13:15.542388       6 log.go:172] (0xc002a4a630) (0xc002b285a0) Stream removed, broadcasting: 1
I0827 02:13:15.542418       6 log.go:172] (0xc002a4a630) (0xc002b286e0) Stream removed, broadcasting: 3
I0827 02:13:15.542428       6 log.go:172] (0xc002a4a630) (0xc002b28780) Stream removed, broadcasting: 5
Aug 27 02:13:15.542: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:13:15.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6199" for this suite.

• [SLOW TEST:18.503 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":225,"skipped":3716,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:13:15.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:13:16.200: INFO: Creating deployment "test-recreate-deployment"
Aug 27 02:13:16.205: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Aug 27 02:13:16.869: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 27 02:13:18.875: INFO: Waiting for deployment "test-recreate-deployment" to complete
Aug 27 02:13:18.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091196, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091196, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091197, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091196, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 02:13:20.902: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 27 02:13:20.926: INFO: Updating deployment test-recreate-deployment
Aug 27 02:13:20.926: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Aug 27 02:13:21.674: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-7160 /apis/apps/v1/namespaces/deployment-7160/deployments/test-recreate-deployment 96b7568e-b315-4e9c-9295-729ec2dd6810 4095952 2 2020-08-27 02:13:16 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003cb3a78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-27 02:13:21 +0000 UTC,LastTransitionTime:2020-08-27 02:13:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-08-27 02:13:21 +0000 UTC,LastTransitionTime:2020-08-27 02:13:16 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 27 02:13:21.706: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-7160 /apis/apps/v1/namespaces/deployment-7160/replicasets/test-recreate-deployment-5f94c574ff 83e1096c-a124-4b6e-8f54-8c6f2bd41f65 4095951 1 2020-08-27 02:13:21 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 96b7568e-b315-4e9c-9295-729ec2dd6810 0xc003cb3f27 0xc003cb3f28}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003cb3fb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 27 02:13:21.706: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 27 02:13:21.706: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-7160 /apis/apps/v1/namespaces/deployment-7160/replicasets/test-recreate-deployment-799c574856 28cc69b6-df46-4d5d-8b8e-02c475b82074 4095941 2 2020-08-27 02:13:16 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 96b7568e-b315-4e9c-9295-729ec2dd6810 0xc003c80037 0xc003c80038}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c800a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 27 02:13:21.800: INFO: Pod "test-recreate-deployment-5f94c574ff-svchq" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-svchq test-recreate-deployment-5f94c574ff- deployment-7160 /api/v1/namespaces/deployment-7160/pods/test-recreate-deployment-5f94c574ff-svchq c4fc61b9-cb7b-40da-9423-163e7b6ddc93 4095953 0 2020-08-27 02:13:21 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 83e1096c-a124-4b6e-8f54-8c6f2bd41f65 0xc003c80677 0xc003c80678}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x8bjf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x8bjf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x8bjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 02:13:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 02:13:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 02:13:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-27 02:13:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-08-27 02:13:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:13:21.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7160" for this suite.

• [SLOW TEST:6.270 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":226,"skipped":3761,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:13:21.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:14:22.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8823" for this suite.

• [SLOW TEST:60.409 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3770,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:14:22.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:14:22.315: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:14:23.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6526" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":228,"skipped":3788,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:14:23.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0827 02:14:55.799904       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 27 02:14:55.799: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:14:55.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1932" for this suite.

• [SLOW TEST:32.495 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":229,"skipped":3816,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:14:56.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:14:57.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 27 02:14:59.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9089 create -f -'
Aug 27 02:15:05.213: INFO: stderr: ""
Aug 27 02:15:05.213: INFO: stdout: "e2e-test-crd-publish-openapi-7797-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 27 02:15:05.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9089 delete e2e-test-crd-publish-openapi-7797-crds test-foo'
Aug 27 02:15:05.311: INFO: stderr: ""
Aug 27 02:15:05.312: INFO: stdout: "e2e-test-crd-publish-openapi-7797-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 27 02:15:05.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9089 apply -f -'
Aug 27 02:15:05.677: INFO: stderr: ""
Aug 27 02:15:05.677: INFO: stdout: "e2e-test-crd-publish-openapi-7797-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 27 02:15:05.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9089 delete e2e-test-crd-publish-openapi-7797-crds test-foo'
Aug 27 02:15:06.677: INFO: stderr: ""
Aug 27 02:15:06.677: INFO: stdout: "e2e-test-crd-publish-openapi-7797-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 27 02:15:06.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9089 create -f -'
Aug 27 02:15:07.425: INFO: rc: 1
Aug 27 02:15:07.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9089 apply -f -'
Aug 27 02:15:07.753: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 27 02:15:07.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9089 create -f -'
Aug 27 02:15:08.138: INFO: rc: 1
Aug 27 02:15:08.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9089 apply -f -'
Aug 27 02:15:09.151: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 27 02:15:09.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7797-crds'
Aug 27 02:15:09.843: INFO: stderr: ""
Aug 27 02:15:09.843: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7797-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 27 02:15:09.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7797-crds.metadata'
Aug 27 02:15:10.105: INFO: stderr: ""
Aug 27 02:15:10.105: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7797-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 27 02:15:10.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7797-crds.spec'
Aug 27 02:15:10.952: INFO: stderr: ""
Aug 27 02:15:10.952: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7797-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 27 02:15:10.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7797-crds.spec.bars'
Aug 27 02:15:11.416: INFO: stderr: ""
Aug 27 02:15:11.416: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7797-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 27 02:15:11.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7797-crds.spec.bars2'
Aug 27 02:15:12.155: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:15:15.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9089" for this suite.

• [SLOW TEST:19.431 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":230,"skipped":3819,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:15:15.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 27 02:15:27.092: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:15:28.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2054" for this suite.

• [SLOW TEST:13.062 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3860,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:15:28.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 27 02:15:31.211: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:31.427: INFO: Number of nodes with available pods: 0
Aug 27 02:15:31.427: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:15:32.850: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:32.852: INFO: Number of nodes with available pods: 0
Aug 27 02:15:32.852: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:15:33.752: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:33.755: INFO: Number of nodes with available pods: 0
Aug 27 02:15:33.755: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:15:35.045: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:35.048: INFO: Number of nodes with available pods: 0
Aug 27 02:15:35.049: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:15:36.065: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:36.068: INFO: Number of nodes with available pods: 0
Aug 27 02:15:36.068: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:15:36.565: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:36.568: INFO: Number of nodes with available pods: 0
Aug 27 02:15:36.568: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:15:38.285: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:38.596: INFO: Number of nodes with available pods: 0
Aug 27 02:15:38.596: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:15:39.501: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:39.516: INFO: Number of nodes with available pods: 0
Aug 27 02:15:39.516: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:15:40.535: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:40.861: INFO: Number of nodes with available pods: 0
Aug 27 02:15:40.861: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:15:41.432: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:41.436: INFO: Number of nodes with available pods: 1
Aug 27 02:15:41.436: INFO: Node jerma-worker is running more than one daemon pod
Aug 27 02:15:42.494: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:42.497: INFO: Number of nodes with available pods: 2
Aug 27 02:15:42.497: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 27 02:15:42.523: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:42.525: INFO: Number of nodes with available pods: 1
Aug 27 02:15:42.525: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 02:15:43.824: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:43.827: INFO: Number of nodes with available pods: 1
Aug 27 02:15:43.827: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 02:15:44.698: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:44.701: INFO: Number of nodes with available pods: 1
Aug 27 02:15:44.701: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 02:15:45.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:45.606: INFO: Number of nodes with available pods: 1
Aug 27 02:15:45.606: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 02:15:47.021: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:47.025: INFO: Number of nodes with available pods: 1
Aug 27 02:15:47.025: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 02:15:47.700: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:47.750: INFO: Number of nodes with available pods: 1
Aug 27 02:15:47.750: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 02:15:48.530: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:48.534: INFO: Number of nodes with available pods: 1
Aug 27 02:15:48.534: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 02:15:49.818: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:49.821: INFO: Number of nodes with available pods: 1
Aug 27 02:15:49.821: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 02:15:50.701: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:50.841: INFO: Number of nodes with available pods: 1
Aug 27 02:15:50.841: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 02:15:51.578: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:51.581: INFO: Number of nodes with available pods: 1
Aug 27 02:15:51.581: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 02:15:52.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:52.594: INFO: Number of nodes with available pods: 1
Aug 27 02:15:52.594: INFO: Node jerma-worker2 is running more than one daemon pod
Aug 27 02:15:53.537: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 02:15:53.540: INFO: Number of nodes with available pods: 2
Aug 27 02:15:53.540: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5502, will wait for the garbage collector to delete the pods
Aug 27 02:15:53.601: INFO: Deleting DaemonSet.extensions daemon-set took: 5.609034ms
Aug 27 02:15:53.901: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.265527ms
Aug 27 02:16:01.905: INFO: Number of nodes with available pods: 0
Aug 27 02:16:01.905: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 02:16:01.907: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5502/daemonsets","resourceVersion":"4096619"},"items":null}

Aug 27 02:16:01.910: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5502/pods","resourceVersion":"4096619"},"items":null}

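The two empty lists above are the post-deletion verification: after the garbage collector runs, neither DaemonSets nor pods remain in the namespace. A standalone sketch of the same pod check with client-go; the context-free List signature matches the v0.17-era client-go corresponding to this log, and the program itself is illustrative rather than the framework's code.

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken from the log above.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // List whatever pods are left in the test namespace; an empty result
        // confirms the garbage collector finished, as the PodList above shows.
        pods, err := client.CoreV1().Pods("daemonsets-5502").List(metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pods remaining after GC: %d\n", len(pods.Items))
    }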
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:16:01.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5502" for this suite.

• [SLOW TEST:33.328 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":232,"skipped":3872,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:16:01.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4008
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-4008
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-4008
Aug 27 02:16:02.751: INFO: Found 0 stateful pods, waiting for 1
Aug 27 02:16:12.755: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
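"Burst scaling" is the StatefulSet behavior under podManagementPolicy: Parallel, where the controller creates and deletes pods without waiting for each ordinal predecessor to be Running and Ready. The spec is never printed in this log, so the sketch below is an assumed construction, not the test's actual object; only the service name "test" is taken from the log.

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
    )

    func main() {
        ss := appsv1.StatefulSet{
            Spec: appsv1.StatefulSetSpec{
                ServiceName:         "test",                        // matches "Creating service test" above
                PodManagementPolicy: appsv1.ParallelPodManagement, // burst create/delete, no ordered waiting
            },
        }
        fmt.Println("podManagementPolicy:", ss.Spec.PodManagementPolicy)
    }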
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 27 02:16:12.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 27 02:16:13.105: INFO: stderr: "I0827 02:16:12.900713    2854 log.go:172] (0xc00044adc0) (0xc0006c9a40) Create stream\nI0827 02:16:12.900965    2854 log.go:172] (0xc00044adc0) (0xc0006c9a40) Stream added, broadcasting: 1\nI0827 02:16:12.903717    2854 log.go:172] (0xc00044adc0) Reply frame received for 1\nI0827 02:16:12.903749    2854 log.go:172] (0xc00044adc0) (0xc0006c9c20) Create stream\nI0827 02:16:12.903758    2854 log.go:172] (0xc00044adc0) (0xc0006c9c20) Stream added, broadcasting: 3\nI0827 02:16:12.904868    2854 log.go:172] (0xc00044adc0) Reply frame received for 3\nI0827 02:16:12.904983    2854 log.go:172] (0xc00044adc0) (0xc000986000) Create stream\nI0827 02:16:12.905009    2854 log.go:172] (0xc00044adc0) (0xc000986000) Stream added, broadcasting: 5\nI0827 02:16:12.905973    2854 log.go:172] (0xc00044adc0) Reply frame received for 5\nI0827 02:16:12.967341    2854 log.go:172] (0xc00044adc0) Data frame received for 5\nI0827 02:16:12.967374    2854 log.go:172] (0xc000986000) (5) Data frame handling\nI0827 02:16:12.967398    2854 log.go:172] (0xc000986000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0827 02:16:13.091035    2854 log.go:172] (0xc00044adc0) Data frame received for 5\nI0827 02:16:13.091074    2854 log.go:172] (0xc000986000) (5) Data frame handling\nI0827 02:16:13.091104    2854 log.go:172] (0xc00044adc0) Data frame received for 3\nI0827 02:16:13.091119    2854 log.go:172] (0xc0006c9c20) (3) Data frame handling\nI0827 02:16:13.091129    2854 log.go:172] (0xc0006c9c20) (3) Data frame sent\nI0827 02:16:13.091266    2854 log.go:172] (0xc00044adc0) Data frame received for 3\nI0827 02:16:13.091294    2854 log.go:172] (0xc0006c9c20) (3) Data frame handling\nI0827 02:16:13.093287    2854 log.go:172] (0xc00044adc0) Data frame received for 1\nI0827 02:16:13.093326    2854 log.go:172] (0xc0006c9a40) (1) Data frame handling\nI0827 02:16:13.093366    2854 log.go:172] (0xc0006c9a40) (1) Data frame sent\nI0827 02:16:13.093390    2854 log.go:172] (0xc00044adc0) (0xc0006c9a40) Stream removed, broadcasting: 1\nI0827 02:16:13.093409    2854 log.go:172] (0xc00044adc0) Go away received\nI0827 02:16:13.093973    2854 log.go:172] (0xc00044adc0) (0xc0006c9a40) Stream removed, broadcasting: 1\nI0827 02:16:13.093997    2854 log.go:172] (0xc00044adc0) (0xc0006c9c20) Stream removed, broadcasting: 3\nI0827 02:16:13.094010    2854 log.go:172] (0xc00044adc0) (0xc000986000) Stream removed, broadcasting: 5\n"
Aug 27 02:16:13.105: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 27 02:16:13.105: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

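Moving index.html out of htdocs is how the test makes ss-0 unready: the container keeps running, but its readiness probe stops finding the page, so Ready flips to false a few lines below. The pod spec never appears in this log, so the probe sketched here, including its path, port, and thresholds, is an assumption about what such a probe could look like.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := &corev1.Probe{
            // The field is named Handler in the v1.17-era k8s.io/api used here
            // (newer releases call it ProbeHandler).
            Handler: corev1.Handler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/index.html", // assumed: the file the mv above removes
                    Port: intstr.FromInt(80),
                },
            },
            PeriodSeconds:    1,
            FailureThreshold: 1,
        }
        fmt.Printf("readinessProbe: %+v\n", probe)
    }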
Aug 27 02:16:13.285: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 27 02:16:23.289: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 02:16:23.289: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 02:16:23.687: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 27 02:16:23.687: INFO: ss-0  jerma-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:02 +0000 UTC  }]
Aug 27 02:16:23.687: INFO: 
Aug 27 02:16:23.687: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 27 02:16:24.691: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.609828048s
Aug 27 02:16:25.696: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.606070255s
Aug 27 02:16:27.197: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.60145281s
Aug 27 02:16:28.202: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.099940527s
Aug 27 02:16:29.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.094823568s
Aug 27 02:16:30.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.586306042s
Aug 27 02:16:32.015: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.330622659s
Aug 27 02:16:33.027: INFO: Verifying statefulset ss doesn't scale past 3 for another 281.48593ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-4008
Aug 27 02:16:34.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 02:16:34.223: INFO: stderr: "I0827 02:16:34.141788    2878 log.go:172] (0xc0003da000) (0xc000a30140) Create stream\nI0827 02:16:34.141871    2878 log.go:172] (0xc0003da000) (0xc000a30140) Stream added, broadcasting: 1\nI0827 02:16:34.144448    2878 log.go:172] (0xc0003da000) Reply frame received for 1\nI0827 02:16:34.144506    2878 log.go:172] (0xc0003da000) (0xc000645e00) Create stream\nI0827 02:16:34.144531    2878 log.go:172] (0xc0003da000) (0xc000645e00) Stream added, broadcasting: 3\nI0827 02:16:34.145591    2878 log.go:172] (0xc0003da000) Reply frame received for 3\nI0827 02:16:34.145610    2878 log.go:172] (0xc0003da000) (0xc000645ea0) Create stream\nI0827 02:16:34.145615    2878 log.go:172] (0xc0003da000) (0xc000645ea0) Stream added, broadcasting: 5\nI0827 02:16:34.146381    2878 log.go:172] (0xc0003da000) Reply frame received for 5\nI0827 02:16:34.211581    2878 log.go:172] (0xc0003da000) Data frame received for 3\nI0827 02:16:34.211624    2878 log.go:172] (0xc000645e00) (3) Data frame handling\nI0827 02:16:34.211651    2878 log.go:172] (0xc000645e00) (3) Data frame sent\nI0827 02:16:34.211667    2878 log.go:172] (0xc0003da000) Data frame received for 3\nI0827 02:16:34.211689    2878 log.go:172] (0xc000645e00) (3) Data frame handling\nI0827 02:16:34.211752    2878 log.go:172] (0xc0003da000) Data frame received for 5\nI0827 02:16:34.211788    2878 log.go:172] (0xc000645ea0) (5) Data frame handling\nI0827 02:16:34.211809    2878 log.go:172] (0xc000645ea0) (5) Data frame sent\nI0827 02:16:34.211830    2878 log.go:172] (0xc0003da000) Data frame received for 5\nI0827 02:16:34.211847    2878 log.go:172] (0xc000645ea0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0827 02:16:34.213051    2878 log.go:172] (0xc0003da000) Data frame received for 1\nI0827 02:16:34.213068    2878 log.go:172] (0xc000a30140) (1) Data frame handling\nI0827 02:16:34.213077    2878 log.go:172] (0xc000a30140) (1) Data frame sent\nI0827 02:16:34.213276    2878 log.go:172] (0xc0003da000) (0xc000a30140) Stream removed, broadcasting: 1\nI0827 02:16:34.213438    2878 log.go:172] (0xc0003da000) Go away received\nI0827 02:16:34.213546    2878 log.go:172] (0xc0003da000) (0xc000a30140) Stream removed, broadcasting: 1\nI0827 02:16:34.213563    2878 log.go:172] (0xc0003da000) (0xc000645e00) Stream removed, broadcasting: 3\nI0827 02:16:34.213574    2878 log.go:172] (0xc0003da000) (0xc000645ea0) Stream removed, broadcasting: 5\n"
Aug 27 02:16:34.223: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 27 02:16:34.223: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 27 02:16:34.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 02:16:34.714: INFO: stderr: "I0827 02:16:34.646799    2898 log.go:172] (0xc00097c210) (0xc0008381e0) Create stream\nI0827 02:16:34.646853    2898 log.go:172] (0xc00097c210) (0xc0008381e0) Stream added, broadcasting: 1\nI0827 02:16:34.648587    2898 log.go:172] (0xc00097c210) Reply frame received for 1\nI0827 02:16:34.648640    2898 log.go:172] (0xc00097c210) (0xc00067fe00) Create stream\nI0827 02:16:34.648652    2898 log.go:172] (0xc00097c210) (0xc00067fe00) Stream added, broadcasting: 3\nI0827 02:16:34.649588    2898 log.go:172] (0xc00097c210) Reply frame received for 3\nI0827 02:16:34.649624    2898 log.go:172] (0xc00097c210) (0xc000839cc0) Create stream\nI0827 02:16:34.649644    2898 log.go:172] (0xc00097c210) (0xc000839cc0) Stream added, broadcasting: 5\nI0827 02:16:34.650377    2898 log.go:172] (0xc00097c210) Reply frame received for 5\nI0827 02:16:34.706565    2898 log.go:172] (0xc00097c210) Data frame received for 3\nI0827 02:16:34.706603    2898 log.go:172] (0xc00067fe00) (3) Data frame handling\nI0827 02:16:34.706615    2898 log.go:172] (0xc00067fe00) (3) Data frame sent\nI0827 02:16:34.706622    2898 log.go:172] (0xc00097c210) Data frame received for 3\nI0827 02:16:34.706628    2898 log.go:172] (0xc00067fe00) (3) Data frame handling\nI0827 02:16:34.706661    2898 log.go:172] (0xc00097c210) Data frame received for 5\nI0827 02:16:34.706672    2898 log.go:172] (0xc000839cc0) (5) Data frame handling\nI0827 02:16:34.706685    2898 log.go:172] (0xc000839cc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0827 02:16:34.706783    2898 log.go:172] (0xc00097c210) Data frame received for 5\nI0827 02:16:34.706795    2898 log.go:172] (0xc000839cc0) (5) Data frame handling\nI0827 02:16:34.708146    2898 log.go:172] (0xc00097c210) Data frame received for 1\nI0827 02:16:34.708162    2898 log.go:172] (0xc0008381e0) (1) Data frame handling\nI0827 02:16:34.708172    2898 log.go:172] (0xc0008381e0) (1) Data frame sent\nI0827 02:16:34.708182    2898 log.go:172] (0xc00097c210) (0xc0008381e0) Stream removed, broadcasting: 1\nI0827 02:16:34.708204    2898 log.go:172] (0xc00097c210) Go away received\nI0827 02:16:34.708454    2898 log.go:172] (0xc00097c210) (0xc0008381e0) Stream removed, broadcasting: 1\nI0827 02:16:34.708465    2898 log.go:172] (0xc00097c210) (0xc00067fe00) Stream removed, broadcasting: 3\nI0827 02:16:34.708479    2898 log.go:172] (0xc00097c210) (0xc000839cc0) Stream removed, broadcasting: 5\n"
Aug 27 02:16:34.714: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 27 02:16:34.714: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 27 02:16:34.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 02:16:35.839: INFO: stderr: "I0827 02:16:35.016250    2918 log.go:172] (0xc000a469a0) (0xc000944000) Create stream\nI0827 02:16:35.016304    2918 log.go:172] (0xc000a469a0) (0xc000944000) Stream added, broadcasting: 1\nI0827 02:16:35.018844    2918 log.go:172] (0xc000a469a0) Reply frame received for 1\nI0827 02:16:35.018876    2918 log.go:172] (0xc000a469a0) (0xc0009440a0) Create stream\nI0827 02:16:35.018888    2918 log.go:172] (0xc000a469a0) (0xc0009440a0) Stream added, broadcasting: 3\nI0827 02:16:35.019531    2918 log.go:172] (0xc000a469a0) Reply frame received for 3\nI0827 02:16:35.019573    2918 log.go:172] (0xc000a469a0) (0xc0006819a0) Create stream\nI0827 02:16:35.019598    2918 log.go:172] (0xc000a469a0) (0xc0006819a0) Stream added, broadcasting: 5\nI0827 02:16:35.020368    2918 log.go:172] (0xc000a469a0) Reply frame received for 5\nI0827 02:16:35.074434    2918 log.go:172] (0xc000a469a0) Data frame received for 5\nI0827 02:16:35.074455    2918 log.go:172] (0xc0006819a0) (5) Data frame handling\nI0827 02:16:35.074467    2918 log.go:172] (0xc0006819a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0827 02:16:35.825337    2918 log.go:172] (0xc000a469a0) Data frame received for 3\nI0827 02:16:35.825376    2918 log.go:172] (0xc0009440a0) (3) Data frame handling\nI0827 02:16:35.825419    2918 log.go:172] (0xc0009440a0) (3) Data frame sent\nI0827 02:16:35.825469    2918 log.go:172] (0xc000a469a0) Data frame received for 5\nI0827 02:16:35.825495    2918 log.go:172] (0xc0006819a0) (5) Data frame handling\nI0827 02:16:35.825505    2918 log.go:172] (0xc0006819a0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0827 02:16:35.826863    2918 log.go:172] (0xc000a469a0) Data frame received for 5\nI0827 02:16:35.826892    2918 log.go:172] (0xc0006819a0) (5) Data frame handling\nI0827 02:16:35.826905    2918 log.go:172] (0xc0006819a0) (5) Data frame sent\nI0827 02:16:35.826913    2918 log.go:172] (0xc000a469a0) Data frame received for 5\nI0827 02:16:35.826920    2918 log.go:172] (0xc0006819a0) (5) Data frame handling\n+ true\nI0827 02:16:35.826945    2918 log.go:172] (0xc000a469a0) Data frame received for 3\nI0827 02:16:35.826954    2918 log.go:172] (0xc0009440a0) (3) Data frame handling\nI0827 02:16:35.828580    2918 log.go:172] (0xc000a469a0) Data frame received for 1\nI0827 02:16:35.828605    2918 log.go:172] (0xc000944000) (1) Data frame handling\nI0827 02:16:35.828627    2918 log.go:172] (0xc000944000) (1) Data frame sent\nI0827 02:16:35.828643    2918 log.go:172] (0xc000a469a0) (0xc000944000) Stream removed, broadcasting: 1\nI0827 02:16:35.828659    2918 log.go:172] (0xc000a469a0) Go away received\nI0827 02:16:35.829123    2918 log.go:172] (0xc000a469a0) (0xc000944000) Stream removed, broadcasting: 1\nI0827 02:16:35.829146    2918 log.go:172] (0xc000a469a0) (0xc0009440a0) Stream removed, broadcasting: 3\nI0827 02:16:35.829156    2918 log.go:172] (0xc000a469a0) (0xc0006819a0) Stream removed, broadcasting: 5\n"
Aug 27 02:16:35.839: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 27 02:16:35.839: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 27 02:16:35.865: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:16:35.865: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:16:35.865: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 27 02:16:35.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 27 02:16:36.268: INFO: stderr: "I0827 02:16:35.996994    2940 log.go:172] (0xc0001046e0) (0xc00060fe00) Create stream\nI0827 02:16:35.997043    2940 log.go:172] (0xc0001046e0) (0xc00060fe00) Stream added, broadcasting: 1\nI0827 02:16:35.999232    2940 log.go:172] (0xc0001046e0) Reply frame received for 1\nI0827 02:16:35.999267    2940 log.go:172] (0xc0001046e0) (0xc00060fea0) Create stream\nI0827 02:16:35.999280    2940 log.go:172] (0xc0001046e0) (0xc00060fea0) Stream added, broadcasting: 3\nI0827 02:16:36.000089    2940 log.go:172] (0xc0001046e0) Reply frame received for 3\nI0827 02:16:36.000122    2940 log.go:172] (0xc0001046e0) (0xc00050e6e0) Create stream\nI0827 02:16:36.000142    2940 log.go:172] (0xc0001046e0) (0xc00050e6e0) Stream added, broadcasting: 5\nI0827 02:16:36.001112    2940 log.go:172] (0xc0001046e0) Reply frame received for 5\nI0827 02:16:36.069084    2940 log.go:172] (0xc0001046e0) Data frame received for 5\nI0827 02:16:36.069102    2940 log.go:172] (0xc00050e6e0) (5) Data frame handling\nI0827 02:16:36.069112    2940 log.go:172] (0xc00050e6e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0827 02:16:36.254954    2940 log.go:172] (0xc0001046e0) Data frame received for 3\nI0827 02:16:36.255259    2940 log.go:172] (0xc00060fea0) (3) Data frame handling\nI0827 02:16:36.255287    2940 log.go:172] (0xc00060fea0) (3) Data frame sent\nI0827 02:16:36.255300    2940 log.go:172] (0xc0001046e0) Data frame received for 3\nI0827 02:16:36.255316    2940 log.go:172] (0xc00060fea0) (3) Data frame handling\nI0827 02:16:36.255333    2940 log.go:172] (0xc0001046e0) Data frame received for 5\nI0827 02:16:36.255353    2940 log.go:172] (0xc00050e6e0) (5) Data frame handling\nI0827 02:16:36.256711    2940 log.go:172] (0xc0001046e0) Data frame received for 1\nI0827 02:16:36.256820    2940 log.go:172] (0xc00060fe00) (1) Data frame handling\nI0827 02:16:36.256835    2940 log.go:172] (0xc00060fe00) (1) Data frame sent\nI0827 02:16:36.256850    2940 log.go:172] (0xc0001046e0) (0xc00060fe00) Stream removed, broadcasting: 1\nI0827 02:16:36.256865    2940 log.go:172] (0xc0001046e0) Go away received\nI0827 02:16:36.257243    2940 log.go:172] (0xc0001046e0) (0xc00060fe00) Stream removed, broadcasting: 1\nI0827 02:16:36.257266    2940 log.go:172] (0xc0001046e0) (0xc00060fea0) Stream removed, broadcasting: 3\nI0827 02:16:36.257273    2940 log.go:172] (0xc0001046e0) (0xc00050e6e0) Stream removed, broadcasting: 5\n"
Aug 27 02:16:36.268: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 27 02:16:36.268: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 27 02:16:36.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 27 02:16:36.939: INFO: stderr: "I0827 02:16:36.394681    2960 log.go:172] (0xc000a5ac60) (0xc000b643c0) Create stream\nI0827 02:16:36.394734    2960 log.go:172] (0xc000a5ac60) (0xc000b643c0) Stream added, broadcasting: 1\nI0827 02:16:36.396618    2960 log.go:172] (0xc000a5ac60) Reply frame received for 1\nI0827 02:16:36.396673    2960 log.go:172] (0xc000a5ac60) (0xc000b64460) Create stream\nI0827 02:16:36.396687    2960 log.go:172] (0xc000a5ac60) (0xc000b64460) Stream added, broadcasting: 3\nI0827 02:16:36.397630    2960 log.go:172] (0xc000a5ac60) Reply frame received for 3\nI0827 02:16:36.397664    2960 log.go:172] (0xc000a5ac60) (0xc00051fae0) Create stream\nI0827 02:16:36.397676    2960 log.go:172] (0xc000a5ac60) (0xc00051fae0) Stream added, broadcasting: 5\nI0827 02:16:36.398492    2960 log.go:172] (0xc000a5ac60) Reply frame received for 5\nI0827 02:16:36.468269    2960 log.go:172] (0xc000a5ac60) Data frame received for 5\nI0827 02:16:36.468300    2960 log.go:172] (0xc00051fae0) (5) Data frame handling\nI0827 02:16:36.468319    2960 log.go:172] (0xc00051fae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0827 02:16:36.929653    2960 log.go:172] (0xc000a5ac60) Data frame received for 3\nI0827 02:16:36.929692    2960 log.go:172] (0xc000b64460) (3) Data frame handling\nI0827 02:16:36.929722    2960 log.go:172] (0xc000b64460) (3) Data frame sent\nI0827 02:16:36.930101    2960 log.go:172] (0xc000a5ac60) Data frame received for 5\nI0827 02:16:36.930204    2960 log.go:172] (0xc00051fae0) (5) Data frame handling\nI0827 02:16:36.930244    2960 log.go:172] (0xc000a5ac60) Data frame received for 3\nI0827 02:16:36.930266    2960 log.go:172] (0xc000b64460) (3) Data frame handling\nI0827 02:16:36.931727    2960 log.go:172] (0xc000a5ac60) Data frame received for 1\nI0827 02:16:36.931756    2960 log.go:172] (0xc000b643c0) (1) Data frame handling\nI0827 02:16:36.931771    2960 log.go:172] (0xc000b643c0) (1) Data frame sent\nI0827 02:16:36.931786    2960 log.go:172] (0xc000a5ac60) (0xc000b643c0) Stream removed, broadcasting: 1\nI0827 02:16:36.932031    2960 log.go:172] (0xc000a5ac60) Go away received\nI0827 02:16:36.932181    2960 log.go:172] (0xc000a5ac60) (0xc000b643c0) Stream removed, broadcasting: 1\nI0827 02:16:36.932205    2960 log.go:172] (0xc000a5ac60) (0xc000b64460) Stream removed, broadcasting: 3\nI0827 02:16:36.932215    2960 log.go:172] (0xc000a5ac60) (0xc00051fae0) Stream removed, broadcasting: 5\n"
Aug 27 02:16:36.939: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 27 02:16:36.939: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 27 02:16:36.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 27 02:16:37.993: INFO: stderr: "I0827 02:16:37.797537    2980 log.go:172] (0xc0001053f0) (0xc000663ae0) Create stream\nI0827 02:16:37.797596    2980 log.go:172] (0xc0001053f0) (0xc000663ae0) Stream added, broadcasting: 1\nI0827 02:16:37.800204    2980 log.go:172] (0xc0001053f0) Reply frame received for 1\nI0827 02:16:37.800242    2980 log.go:172] (0xc0001053f0) (0xc000663cc0) Create stream\nI0827 02:16:37.800250    2980 log.go:172] (0xc0001053f0) (0xc000663cc0) Stream added, broadcasting: 3\nI0827 02:16:37.801215    2980 log.go:172] (0xc0001053f0) Reply frame received for 3\nI0827 02:16:37.801241    2980 log.go:172] (0xc0001053f0) (0xc000952000) Create stream\nI0827 02:16:37.801248    2980 log.go:172] (0xc0001053f0) (0xc000952000) Stream added, broadcasting: 5\nI0827 02:16:37.801990    2980 log.go:172] (0xc0001053f0) Reply frame received for 5\nI0827 02:16:37.867317    2980 log.go:172] (0xc0001053f0) Data frame received for 5\nI0827 02:16:37.867344    2980 log.go:172] (0xc000952000) (5) Data frame handling\nI0827 02:16:37.867359    2980 log.go:172] (0xc000952000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0827 02:16:37.975469    2980 log.go:172] (0xc0001053f0) Data frame received for 3\nI0827 02:16:37.975511    2980 log.go:172] (0xc000663cc0) (3) Data frame handling\nI0827 02:16:37.975532    2980 log.go:172] (0xc000663cc0) (3) Data frame sent\nI0827 02:16:37.975548    2980 log.go:172] (0xc0001053f0) Data frame received for 3\nI0827 02:16:37.975562    2980 log.go:172] (0xc000663cc0) (3) Data frame handling\nI0827 02:16:37.975601    2980 log.go:172] (0xc0001053f0) Data frame received for 5\nI0827 02:16:37.975625    2980 log.go:172] (0xc000952000) (5) Data frame handling\nI0827 02:16:37.980048    2980 log.go:172] (0xc0001053f0) Data frame received for 1\nI0827 02:16:37.980156    2980 log.go:172] (0xc000663ae0) (1) Data frame handling\nI0827 02:16:37.980238    2980 log.go:172] (0xc000663ae0) (1) Data frame sent\nI0827 02:16:37.980342    2980 log.go:172] (0xc0001053f0) (0xc000663ae0) Stream removed, broadcasting: 1\nI0827 02:16:37.980380    2980 log.go:172] (0xc0001053f0) Go away received\nI0827 02:16:37.980972    2980 log.go:172] (0xc0001053f0) (0xc000663ae0) Stream removed, broadcasting: 1\nI0827 02:16:37.981001    2980 log.go:172] (0xc0001053f0) (0xc000663cc0) Stream removed, broadcasting: 3\nI0827 02:16:37.981016    2980 log.go:172] (0xc0001053f0) (0xc000952000) Stream removed, broadcasting: 5\n"
Aug 27 02:16:37.993: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 27 02:16:37.993: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 27 02:16:37.993: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 02:16:38.024: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 27 02:16:48.086: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 02:16:48.086: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 02:16:48.086: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 02:16:48.104: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 27 02:16:48.104: INFO: ss-0  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:02 +0000 UTC  }]
Aug 27 02:16:48.104: INFO: ss-1  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  }]
Aug 27 02:16:48.104: INFO: ss-2  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  }]
Aug 27 02:16:48.104: INFO: 
Aug 27 02:16:48.104: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 27 02:16:49.470: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 27 02:16:49.470: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:02 +0000 UTC  }]
Aug 27 02:16:49.470: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  }]
Aug 27 02:16:49.470: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  }]
Aug 27 02:16:49.470: INFO: 
Aug 27 02:16:49.470: INFO: StatefulSet ss has not reached scale 0, at 3
[... identical status dump at Aug 27 02:16:50 omitted: all three pods still Running with 30s grace, StatefulSet ss still at scale 3 ...]
Aug 27 02:16:51.638: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 27 02:16:51.638: INFO: ss-0  jerma-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:02 +0000 UTC  }]
Aug 27 02:16:51.638: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  }]
Aug 27 02:16:51.638: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  }]
Aug 27 02:16:51.638: INFO: 
Aug 27 02:16:51.638: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 27 02:16:52.641: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 27 02:16:52.641: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  }]
Aug 27 02:16:52.641: INFO: ss-2  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-27 02:16:23 +0000 UTC  }]
Aug 27 02:16:52.641: INFO: 
Aug 27 02:16:52.641: INFO: StatefulSet ss has not reached scale 0, at 2
[... 5 identical status dumps (Aug 27 02:16:53 through 02:16:57, one per second) omitted: ss-1 and ss-2 still Pending on jerma-worker2 with 30s grace, StatefulSet ss still at scale 2 ...]
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-4008
Aug 27 02:16:59.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 02:16:59.401: INFO: rc: 1
Aug 27 02:16:59.401: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Aug 27 02:17:09.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 02:17:09.527: INFO: rc: 1
Aug 27 02:17:09.527: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
[... 28 identical retries (Aug 27 02:17:19 through 02:21:53, one every 10s) omitted: each run of the same kubectl exec returns rc: 1 with stderr 'Error from server (NotFound): pods "ss-1" not found', exit status 1 ...]
Aug 27 02:22:03.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4008 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 02:22:04.072: INFO: rc: 1
Aug 27 02:22:04.072: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
Aug 27 02:22:04.072: INFO: Scaling statefulset ss to 0
Aug 27 02:22:04.080: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 27 02:22:04.082: INFO: Deleting all statefulset in ns statefulset-4008
Aug 27 02:22:04.083: INFO: Scaling statefulset ss to 0
Aug 27 02:22:04.090: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 02:22:04.091: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:22:04.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4008" for this suite.

• [SLOW TEST:362.197 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":233,"skipped":3902,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:22:04.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 02:22:06.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091725, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091725, loc:(*time.Location)(0x7931640)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091725, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091725, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Aug 27 02:22:08.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091725, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091725, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091726, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091725, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 02:22:11.491 - 02:22:17.955: INFO: deployment status unchanged; identical status lines at 02:22:11.491, 02:22:12.783, 02:22:14.723 and 02:22:17.955 elided (Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1; Available=False/"MinimumReplicasUnavailable", Progressing=True/"ReplicaSetUpdated": ReplicaSet "sample-webhook-deployment-5f65f8c764" is progressing)
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 02:22:20.052: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:22:20.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1042-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:22:23.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1021" for this suite.
STEP: Destroying namespace "webhook-1021-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:21.237 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":234,"skipped":3953,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:22:25.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 27 02:22:27.520: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 27 02:22:29.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091747, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091747, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091748, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091747, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 02:22:31.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091747, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091747, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091748, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734091747, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 02:22:35.625: INFO: Waiting for the number of endpoints of service e2e-test-crd-conversion-webhook to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:22:36.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:22:39.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-707" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:14.784 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":235,"skipped":3969,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:22:40.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:22:40.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 27 02:22:44.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 create -f -'
Aug 27 02:23:07.922: INFO: stderr: ""
Aug 27 02:23:07.922: INFO: stdout: "e2e-test-crd-publish-openapi-7643-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 27 02:23:07.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 delete e2e-test-crd-publish-openapi-7643-crds test-cr'
Aug 27 02:23:08.093: INFO: stderr: ""
Aug 27 02:23:08.093: INFO: stdout: "e2e-test-crd-publish-openapi-7643-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 27 02:23:08.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 apply -f -'
Aug 27 02:23:08.439: INFO: stderr: ""
Aug 27 02:23:08.439: INFO: stdout: "e2e-test-crd-publish-openapi-7643-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 27 02:23:08.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 delete e2e-test-crd-publish-openapi-7643-crds test-cr'
Aug 27 02:23:08.792: INFO: stderr: ""
Aug 27 02:23:08.792: INFO: stdout: "e2e-test-crd-publish-openapi-7643-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 27 02:23:08.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7643-crds'
Aug 27 02:23:09.177: INFO: stderr: ""
Aug 27 02:23:09.178: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7643-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:23:11.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8188" for this suite.

• [SLOW TEST:31.469 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":236,"skipped":3997,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:23:11.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5513
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5513
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-5513
Aug 27 02:23:11.809: INFO: Found 0 stateful pods, waiting for 1
Aug 27 02:23:21.813: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 27 02:23:21.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5513 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 27 02:23:22.366: INFO: stderr: "I0827 02:23:22.143347    3710 log.go:172] (0xc00011cc60) (0xc0007a21e0) Create stream\nI0827 02:23:22.143415    3710 log.go:172] (0xc00011cc60) (0xc0007a21e0) Stream added, broadcasting: 1\nI0827 02:23:22.146388    3710 log.go:172] (0xc00011cc60) Reply frame received for 1\nI0827 02:23:22.146435    3710 log.go:172] (0xc00011cc60) (0xc000607ae0) Create stream\nI0827 02:23:22.146454    3710 log.go:172] (0xc00011cc60) (0xc000607ae0) Stream added, broadcasting: 3\nI0827 02:23:22.147617    3710 log.go:172] (0xc00011cc60) Reply frame received for 3\nI0827 02:23:22.147662    3710 log.go:172] (0xc00011cc60) (0xc0005dd0e0) Create stream\nI0827 02:23:22.147678    3710 log.go:172] (0xc00011cc60) (0xc0005dd0e0) Stream added, broadcasting: 5\nI0827 02:23:22.149120    3710 log.go:172] (0xc00011cc60) Reply frame received for 5\nI0827 02:23:22.217751    3710 log.go:172] (0xc00011cc60) Data frame received for 5\nI0827 02:23:22.217784    3710 log.go:172] (0xc0005dd0e0) (5) Data frame handling\nI0827 02:23:22.217807    3710 log.go:172] (0xc0005dd0e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0827 02:23:22.354619    3710 log.go:172] (0xc00011cc60) Data frame received for 3\nI0827 02:23:22.354654    3710 log.go:172] (0xc000607ae0) (3) Data frame handling\nI0827 02:23:22.354669    3710 log.go:172] (0xc000607ae0) (3) Data frame sent\nI0827 02:23:22.355043    3710 log.go:172] (0xc00011cc60) Data frame received for 5\nI0827 02:23:22.355063    3710 log.go:172] (0xc0005dd0e0) (5) Data frame handling\nI0827 02:23:22.355083    3710 log.go:172] (0xc00011cc60) Data frame received for 3\nI0827 02:23:22.355102    3710 log.go:172] (0xc000607ae0) (3) Data frame handling\nI0827 02:23:22.357322    3710 log.go:172] (0xc00011cc60) Data frame received for 1\nI0827 02:23:22.357386    3710 log.go:172] (0xc0007a21e0) (1) Data frame handling\nI0827 02:23:22.357408    3710 log.go:172] (0xc0007a21e0) (1) Data frame sent\nI0827 02:23:22.357428    3710 log.go:172] (0xc00011cc60) (0xc0007a21e0) Stream removed, broadcasting: 1\nI0827 02:23:22.357454    3710 log.go:172] (0xc00011cc60) Go away received\nI0827 02:23:22.357779    3710 log.go:172] (0xc00011cc60) (0xc0007a21e0) Stream removed, broadcasting: 1\nI0827 02:23:22.357793    3710 log.go:172] (0xc00011cc60) (0xc000607ae0) Stream removed, broadcasting: 3\nI0827 02:23:22.357800    3710 log.go:172] (0xc00011cc60) (0xc0005dd0e0) Stream removed, broadcasting: 5\n"
Aug 27 02:23:22.366: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 27 02:23:22.366: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 27 02:23:22.372: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 27 02:23:32.376: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 02:23:32.376: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 02:23:32.398: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999779s
Aug 27 02:23:33.402: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.986114856s
Aug 27 02:23:34.410: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.982877477s
Aug 27 02:23:35.414: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.97416702s
Aug 27 02:23:36.417: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.97060292s
Aug 27 02:23:37.420: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.967374122s
Aug 27 02:23:38.424: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.964372479s
Aug 27 02:23:39.426: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.960930433s
Aug 27 02:23:40.436: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.958435489s
Aug 27 02:23:41.441: INFO: Verifying statefulset ss doesn't scale past 1 for another 948.532183ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5513
Aug 27 02:23:42.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5513 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 02:23:42.645: INFO: stderr: "I0827 02:23:42.580369    3731 log.go:172] (0xc000528dc0) (0xc0005a4000) Create stream\nI0827 02:23:42.580418    3731 log.go:172] (0xc000528dc0) (0xc0005a4000) Stream added, broadcasting: 1\nI0827 02:23:42.582447    3731 log.go:172] (0xc000528dc0) Reply frame received for 1\nI0827 02:23:42.582480    3731 log.go:172] (0xc000528dc0) (0xc0005f1b80) Create stream\nI0827 02:23:42.582489    3731 log.go:172] (0xc000528dc0) (0xc0005f1b80) Stream added, broadcasting: 3\nI0827 02:23:42.583168    3731 log.go:172] (0xc000528dc0) Reply frame received for 3\nI0827 02:23:42.583191    3731 log.go:172] (0xc000528dc0) (0xc0000c4000) Create stream\nI0827 02:23:42.583197    3731 log.go:172] (0xc000528dc0) (0xc0000c4000) Stream added, broadcasting: 5\nI0827 02:23:42.583844    3731 log.go:172] (0xc000528dc0) Reply frame received for 5\nI0827 02:23:42.638830    3731 log.go:172] (0xc000528dc0) Data frame received for 3\nI0827 02:23:42.638850    3731 log.go:172] (0xc0005f1b80) (3) Data frame handling\nI0827 02:23:42.638856    3731 log.go:172] (0xc0005f1b80) (3) Data frame sent\nI0827 02:23:42.638860    3731 log.go:172] (0xc000528dc0) Data frame received for 3\nI0827 02:23:42.638866    3731 log.go:172] (0xc0005f1b80) (3) Data frame handling\nI0827 02:23:42.638882    3731 log.go:172] (0xc000528dc0) Data frame received for 5\nI0827 02:23:42.638886    3731 log.go:172] (0xc0000c4000) (5) Data frame handling\nI0827 02:23:42.638891    3731 log.go:172] (0xc0000c4000) (5) Data frame sent\nI0827 02:23:42.638895    3731 log.go:172] (0xc000528dc0) Data frame received for 5\nI0827 02:23:42.638899    3731 log.go:172] (0xc0000c4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0827 02:23:42.639648    3731 log.go:172] (0xc000528dc0) Data frame received for 1\nI0827 02:23:42.639681    3731 log.go:172] (0xc0005a4000) (1) Data frame handling\nI0827 02:23:42.639690    3731 log.go:172] (0xc0005a4000) (1) Data frame sent\nI0827 02:23:42.639706    3731 log.go:172] (0xc000528dc0) (0xc0005a4000) Stream removed, broadcasting: 1\nI0827 02:23:42.639719    3731 log.go:172] (0xc000528dc0) Go away received\nI0827 02:23:42.639999    3731 log.go:172] (0xc000528dc0) (0xc0005a4000) Stream removed, broadcasting: 1\nI0827 02:23:42.640012    3731 log.go:172] (0xc000528dc0) (0xc0005f1b80) Stream removed, broadcasting: 3\nI0827 02:23:42.640017    3731 log.go:172] (0xc000528dc0) (0xc0000c4000) Stream removed, broadcasting: 5\n"
Aug 27 02:23:42.645: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 27 02:23:42.645: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 27 02:23:42.647: INFO: Found 1 stateful pods, waiting for 3
Aug 27 02:23:52.652: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:23:52.652: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:23:52.652: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 27 02:24:02.651: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:24:02.651: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:24:02.651: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 27 02:24:02.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5513 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 27 02:24:02.842: INFO: stderr: "I0827 02:24:02.781839    3753 log.go:172] (0xc0006346e0) (0xc000618000) Create stream\nI0827 02:24:02.781887    3753 log.go:172] (0xc0006346e0) (0xc000618000) Stream added, broadcasting: 1\nI0827 02:24:02.783512    3753 log.go:172] (0xc0006346e0) Reply frame received for 1\nI0827 02:24:02.783544    3753 log.go:172] (0xc0006346e0) (0xc0005f6000) Create stream\nI0827 02:24:02.783555    3753 log.go:172] (0xc0006346e0) (0xc0005f6000) Stream added, broadcasting: 3\nI0827 02:24:02.784315    3753 log.go:172] (0xc0006346e0) Reply frame received for 3\nI0827 02:24:02.784370    3753 log.go:172] (0xc0006346e0) (0xc000607a40) Create stream\nI0827 02:24:02.784389    3753 log.go:172] (0xc0006346e0) (0xc000607a40) Stream added, broadcasting: 5\nI0827 02:24:02.785287    3753 log.go:172] (0xc0006346e0) Reply frame received for 5\nI0827 02:24:02.836009    3753 log.go:172] (0xc0006346e0) Data frame received for 3\nI0827 02:24:02.836032    3753 log.go:172] (0xc0005f6000) (3) Data frame handling\nI0827 02:24:02.836053    3753 log.go:172] (0xc0006346e0) Data frame received for 5\nI0827 02:24:02.836088    3753 log.go:172] (0xc000607a40) (5) Data frame handling\nI0827 02:24:02.836103    3753 log.go:172] (0xc000607a40) (5) Data frame sent\nI0827 02:24:02.836114    3753 log.go:172] (0xc0006346e0) Data frame received for 5\nI0827 02:24:02.836124    3753 log.go:172] (0xc000607a40) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0827 02:24:02.836146    3753 log.go:172] (0xc0005f6000) (3) Data frame sent\nI0827 02:24:02.836176    3753 log.go:172] (0xc0006346e0) Data frame received for 3\nI0827 02:24:02.836189    3753 log.go:172] (0xc0005f6000) (3) Data frame handling\nI0827 02:24:02.837300    3753 log.go:172] (0xc0006346e0) Data frame received for 1\nI0827 02:24:02.837324    3753 log.go:172] (0xc000618000) (1) Data frame handling\nI0827 02:24:02.837350    3753 log.go:172] (0xc000618000) (1) Data frame sent\nI0827 02:24:02.837366    3753 log.go:172] (0xc0006346e0) (0xc000618000) Stream removed, broadcasting: 1\nI0827 02:24:02.837458    3753 log.go:172] (0xc0006346e0) Go away received\nI0827 02:24:02.837702    3753 log.go:172] (0xc0006346e0) (0xc000618000) Stream removed, broadcasting: 1\nI0827 02:24:02.837718    3753 log.go:172] (0xc0006346e0) (0xc0005f6000) Stream removed, broadcasting: 3\nI0827 02:24:02.837730    3753 log.go:172] (0xc0006346e0) (0xc000607a40) Stream removed, broadcasting: 5\n"
Aug 27 02:24:02.842: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 27 02:24:02.842: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 27 02:24:02.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5513 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 27 02:24:03.503: INFO: stderr: "I0827 02:24:03.109097    3772 log.go:172] (0xc000216dc0) (0xc000623ae0) Create stream\nI0827 02:24:03.109171    3772 log.go:172] (0xc000216dc0) (0xc000623ae0) Stream added, broadcasting: 1\nI0827 02:24:03.120933    3772 log.go:172] (0xc000216dc0) Reply frame received for 1\nI0827 02:24:03.120984    3772 log.go:172] (0xc000216dc0) (0xc00084e000) Create stream\nI0827 02:24:03.120997    3772 log.go:172] (0xc000216dc0) (0xc00084e000) Stream added, broadcasting: 3\nI0827 02:24:03.121926    3772 log.go:172] (0xc000216dc0) Reply frame received for 3\nI0827 02:24:03.121962    3772 log.go:172] (0xc000216dc0) (0xc000623cc0) Create stream\nI0827 02:24:03.121973    3772 log.go:172] (0xc000216dc0) (0xc000623cc0) Stream added, broadcasting: 5\nI0827 02:24:03.122815    3772 log.go:172] (0xc000216dc0) Reply frame received for 5\nI0827 02:24:03.168374    3772 log.go:172] (0xc000216dc0) Data frame received for 5\nI0827 02:24:03.168404    3772 log.go:172] (0xc000623cc0) (5) Data frame handling\nI0827 02:24:03.168415    3772 log.go:172] (0xc000623cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0827 02:24:03.491719    3772 log.go:172] (0xc000216dc0) Data frame received for 3\nI0827 02:24:03.491746    3772 log.go:172] (0xc00084e000) (3) Data frame handling\nI0827 02:24:03.491761    3772 log.go:172] (0xc00084e000) (3) Data frame sent\nI0827 02:24:03.491769    3772 log.go:172] (0xc000216dc0) Data frame received for 3\nI0827 02:24:03.491775    3772 log.go:172] (0xc00084e000) (3) Data frame handling\nI0827 02:24:03.492019    3772 log.go:172] (0xc000216dc0) Data frame received for 5\nI0827 02:24:03.492036    3772 log.go:172] (0xc000623cc0) (5) Data frame handling\nI0827 02:24:03.493384    3772 log.go:172] (0xc000216dc0) Data frame received for 1\nI0827 02:24:03.493399    3772 log.go:172] (0xc000623ae0) (1) Data frame handling\nI0827 02:24:03.493406    3772 log.go:172] (0xc000623ae0) (1) Data frame sent\nI0827 02:24:03.493415    3772 log.go:172] (0xc000216dc0) (0xc000623ae0) Stream removed, broadcasting: 1\nI0827 02:24:03.493421    3772 log.go:172] (0xc000216dc0) Go away received\nI0827 02:24:03.493847    3772 log.go:172] (0xc000216dc0) (0xc000623ae0) Stream removed, broadcasting: 1\nI0827 02:24:03.493871    3772 log.go:172] (0xc000216dc0) (0xc00084e000) Stream removed, broadcasting: 3\nI0827 02:24:03.493883    3772 log.go:172] (0xc000216dc0) (0xc000623cc0) Stream removed, broadcasting: 5\n"
Aug 27 02:24:03.503: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 27 02:24:03.503: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 27 02:24:03.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5513 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 27 02:24:04.321: INFO: stderr: "I0827 02:24:04.019553    3793 log.go:172] (0xc0008ef3f0) (0xc0008e6000) Create stream\nI0827 02:24:04.019612    3793 log.go:172] (0xc0008ef3f0) (0xc0008e6000) Stream added, broadcasting: 1\nI0827 02:24:04.021967    3793 log.go:172] (0xc0008ef3f0) Reply frame received for 1\nI0827 02:24:04.021999    3793 log.go:172] (0xc0008ef3f0) (0xc0008ca0a0) Create stream\nI0827 02:24:04.022008    3793 log.go:172] (0xc0008ef3f0) (0xc0008ca0a0) Stream added, broadcasting: 3\nI0827 02:24:04.022694    3793 log.go:172] (0xc0008ef3f0) Reply frame received for 3\nI0827 02:24:04.022719    3793 log.go:172] (0xc0008ef3f0) (0xc0008ca140) Create stream\nI0827 02:24:04.022727    3793 log.go:172] (0xc0008ef3f0) (0xc0008ca140) Stream added, broadcasting: 5\nI0827 02:24:04.023370    3793 log.go:172] (0xc0008ef3f0) Reply frame received for 5\nI0827 02:24:04.090538    3793 log.go:172] (0xc0008ef3f0) Data frame received for 5\nI0827 02:24:04.090556    3793 log.go:172] (0xc0008ca140) (5) Data frame handling\nI0827 02:24:04.090568    3793 log.go:172] (0xc0008ca140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0827 02:24:04.309466    3793 log.go:172] (0xc0008ef3f0) Data frame received for 3\nI0827 02:24:04.309505    3793 log.go:172] (0xc0008ca0a0) (3) Data frame handling\nI0827 02:24:04.309535    3793 log.go:172] (0xc0008ca0a0) (3) Data frame sent\nI0827 02:24:04.309550    3793 log.go:172] (0xc0008ef3f0) Data frame received for 3\nI0827 02:24:04.309562    3793 log.go:172] (0xc0008ca0a0) (3) Data frame handling\nI0827 02:24:04.309987    3793 log.go:172] (0xc0008ef3f0) Data frame received for 5\nI0827 02:24:04.310007    3793 log.go:172] (0xc0008ca140) (5) Data frame handling\nI0827 02:24:04.311703    3793 log.go:172] (0xc0008ef3f0) Data frame received for 1\nI0827 02:24:04.311725    3793 log.go:172] (0xc0008e6000) (1) Data frame handling\nI0827 02:24:04.311746    3793 log.go:172] (0xc0008e6000) (1) Data frame sent\nI0827 02:24:04.311866    3793 log.go:172] (0xc0008ef3f0) (0xc0008e6000) Stream removed, broadcasting: 1\nI0827 02:24:04.312078    3793 log.go:172] (0xc0008ef3f0) Go away received\nI0827 02:24:04.312269    3793 log.go:172] (0xc0008ef3f0) (0xc0008e6000) Stream removed, broadcasting: 1\nI0827 02:24:04.312321    3793 log.go:172] (0xc0008ef3f0) (0xc0008ca0a0) Stream removed, broadcasting: 3\nI0827 02:24:04.312340    3793 log.go:172] (0xc0008ef3f0) (0xc0008ca140) Stream removed, broadcasting: 5\n"
Aug 27 02:24:04.321: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 27 02:24:04.321: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 27 02:24:04.321: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 02:24:04.324: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 27 02:24:14.571: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 02:24:14.572: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 02:24:14.572: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 27 02:24:14.731: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999176s
Aug 27 02:24:15.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.885390856s
Aug 27 02:24:16.769: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.880854193s
Aug 27 02:24:17.772: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.847182364s
Aug 27 02:24:18.775: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.843557157s
Aug 27 02:24:19.779: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.840647027s
Aug 27 02:24:20.796: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.837285432s
Aug 27 02:24:21.801: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.819542253s
Aug 27 02:24:22.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.81525099s
Aug 27 02:24:24.306: INFO: Verifying statefulset ss doesn't scale past 3 for another 811.64174ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5513
Aug 27 02:24:25.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5513 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 02:24:25.764: INFO: stderr: "I0827 02:24:25.692493    3804 log.go:172] (0xc000a18a50) (0xc000a9e6e0) Create stream\nI0827 02:24:25.692556    3804 log.go:172] (0xc000a18a50) (0xc000a9e6e0) Stream added, broadcasting: 1\nI0827 02:24:25.694536    3804 log.go:172] (0xc000a18a50) Reply frame received for 1\nI0827 02:24:25.694568    3804 log.go:172] (0xc000a18a50) (0xc000a760a0) Create stream\nI0827 02:24:25.694576    3804 log.go:172] (0xc000a18a50) (0xc000a760a0) Stream added, broadcasting: 3\nI0827 02:24:25.695487    3804 log.go:172] (0xc000a18a50) Reply frame received for 3\nI0827 02:24:25.695519    3804 log.go:172] (0xc000a18a50) (0xc000a9e780) Create stream\nI0827 02:24:25.695530    3804 log.go:172] (0xc000a18a50) (0xc000a9e780) Stream added, broadcasting: 5\nI0827 02:24:25.696417    3804 log.go:172] (0xc000a18a50) Reply frame received for 5\nI0827 02:24:25.755496    3804 log.go:172] (0xc000a18a50) Data frame received for 5\nI0827 02:24:25.755535    3804 log.go:172] (0xc000a9e780) (5) Data frame handling\nI0827 02:24:25.755545    3804 log.go:172] (0xc000a9e780) (5) Data frame sent\nI0827 02:24:25.755550    3804 log.go:172] (0xc000a18a50) Data frame received for 5\nI0827 02:24:25.755554    3804 log.go:172] (0xc000a9e780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0827 02:24:25.755569    3804 log.go:172] (0xc000a18a50) Data frame received for 3\nI0827 02:24:25.755573    3804 log.go:172] (0xc000a760a0) (3) Data frame handling\nI0827 02:24:25.755578    3804 log.go:172] (0xc000a760a0) (3) Data frame sent\nI0827 02:24:25.755590    3804 log.go:172] (0xc000a18a50) Data frame received for 3\nI0827 02:24:25.755603    3804 log.go:172] (0xc000a760a0) (3) Data frame handling\nI0827 02:24:25.756521    3804 log.go:172] (0xc000a18a50) Data frame received for 1\nI0827 02:24:25.756538    3804 log.go:172] (0xc000a9e6e0) (1) Data frame handling\nI0827 02:24:25.756549    3804 log.go:172] (0xc000a9e6e0) (1) Data frame sent\nI0827 02:24:25.756568    3804 log.go:172] (0xc000a18a50) (0xc000a9e6e0) Stream removed, broadcasting: 1\nI0827 02:24:25.756585    3804 log.go:172] (0xc000a18a50) Go away received\nI0827 02:24:25.756896    3804 log.go:172] (0xc000a18a50) (0xc000a9e6e0) Stream removed, broadcasting: 1\nI0827 02:24:25.756917    3804 log.go:172] (0xc000a18a50) (0xc000a760a0) Stream removed, broadcasting: 3\nI0827 02:24:25.756932    3804 log.go:172] (0xc000a18a50) (0xc000a9e780) Stream removed, broadcasting: 5\n"
Aug 27 02:24:25.764: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 27 02:24:25.764: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 27 02:24:25.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5513 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 02:24:26.085: INFO: stderr: "I0827 02:24:26.023608    3824 log.go:172] (0xc0001142c0) (0xc0004ae5a0) Create stream\nI0827 02:24:26.023647    3824 log.go:172] (0xc0001142c0) (0xc0004ae5a0) Stream added, broadcasting: 1\nI0827 02:24:26.024933    3824 log.go:172] (0xc0001142c0) Reply frame received for 1\nI0827 02:24:26.024961    3824 log.go:172] (0xc0001142c0) (0xc000823360) Create stream\nI0827 02:24:26.024971    3824 log.go:172] (0xc0001142c0) (0xc000823360) Stream added, broadcasting: 3\nI0827 02:24:26.025490    3824 log.go:172] (0xc0001142c0) Reply frame received for 3\nI0827 02:24:26.025507    3824 log.go:172] (0xc0001142c0) (0xc0007860a0) Create stream\nI0827 02:24:26.025512    3824 log.go:172] (0xc0001142c0) (0xc0007860a0) Stream added, broadcasting: 5\nI0827 02:24:26.026043    3824 log.go:172] (0xc0001142c0) Reply frame received for 5\nI0827 02:24:26.079221    3824 log.go:172] (0xc0001142c0) Data frame received for 5\nI0827 02:24:26.079246    3824 log.go:172] (0xc0007860a0) (5) Data frame handling\nI0827 02:24:26.079259    3824 log.go:172] (0xc0007860a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0827 02:24:26.079275    3824 log.go:172] (0xc0001142c0) Data frame received for 3\nI0827 02:24:26.079297    3824 log.go:172] (0xc000823360) (3) Data frame handling\nI0827 02:24:26.079305    3824 log.go:172] (0xc000823360) (3) Data frame sent\nI0827 02:24:26.079312    3824 log.go:172] (0xc0001142c0) Data frame received for 3\nI0827 02:24:26.079317    3824 log.go:172] (0xc000823360) (3) Data frame handling\nI0827 02:24:26.079341    3824 log.go:172] (0xc0001142c0) Data frame received for 5\nI0827 02:24:26.079348    3824 log.go:172] (0xc0007860a0) (5) Data frame handling\nI0827 02:24:26.080449    3824 log.go:172] (0xc0001142c0) Data frame received for 1\nI0827 02:24:26.080486    3824 log.go:172] (0xc0004ae5a0) (1) Data frame handling\nI0827 02:24:26.080500    3824 log.go:172] (0xc0004ae5a0) (1) Data frame sent\nI0827 02:24:26.080515    3824 log.go:172] (0xc0001142c0) (0xc0004ae5a0) Stream removed, broadcasting: 1\nI0827 02:24:26.080527    3824 log.go:172] (0xc0001142c0) Go away received\nI0827 02:24:26.080897    3824 log.go:172] (0xc0001142c0) (0xc0004ae5a0) Stream removed, broadcasting: 1\nI0827 02:24:26.080910    3824 log.go:172] (0xc0001142c0) (0xc000823360) Stream removed, broadcasting: 3\nI0827 02:24:26.080916    3824 log.go:172] (0xc0001142c0) (0xc0007860a0) Stream removed, broadcasting: 5\n"
Aug 27 02:24:26.085: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 27 02:24:26.085: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 27 02:24:26.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5513 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 27 02:24:26.418: INFO: stderr: "I0827 02:24:26.271165    3845 log.go:172] (0xc000a942c0) (0xc000789b80) Create stream\nI0827 02:24:26.271217    3845 log.go:172] (0xc000a942c0) (0xc000789b80) Stream added, broadcasting: 1\nI0827 02:24:26.273011    3845 log.go:172] (0xc000a942c0) Reply frame received for 1\nI0827 02:24:26.273038    3845 log.go:172] (0xc000a942c0) (0xc0008ae000) Create stream\nI0827 02:24:26.273046    3845 log.go:172] (0xc000a942c0) (0xc0008ae000) Stream added, broadcasting: 3\nI0827 02:24:26.273863    3845 log.go:172] (0xc000a942c0) Reply frame received for 3\nI0827 02:24:26.273894    3845 log.go:172] (0xc000a942c0) (0xc0008ae0a0) Create stream\nI0827 02:24:26.273916    3845 log.go:172] (0xc000a942c0) (0xc0008ae0a0) Stream added, broadcasting: 5\nI0827 02:24:26.274922    3845 log.go:172] (0xc000a942c0) Reply frame received for 5\nI0827 02:24:26.329492    3845 log.go:172] (0xc000a942c0) Data frame received for 5\nI0827 02:24:26.329512    3845 log.go:172] (0xc0008ae0a0) (5) Data frame handling\nI0827 02:24:26.329523    3845 log.go:172] (0xc0008ae0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0827 02:24:26.407392    3845 log.go:172] (0xc000a942c0) Data frame received for 3\nI0827 02:24:26.407425    3845 log.go:172] (0xc0008ae000) (3) Data frame handling\nI0827 02:24:26.407442    3845 log.go:172] (0xc0008ae000) (3) Data frame sent\nI0827 02:24:26.407450    3845 log.go:172] (0xc000a942c0) Data frame received for 3\nI0827 02:24:26.407457    3845 log.go:172] (0xc0008ae000) (3) Data frame handling\nI0827 02:24:26.407592    3845 log.go:172] (0xc000a942c0) Data frame received for 5\nI0827 02:24:26.407616    3845 log.go:172] (0xc0008ae0a0) (5) Data frame handling\nI0827 02:24:26.413018    3845 log.go:172] (0xc000a942c0) Data frame received for 1\nI0827 02:24:26.413039    3845 log.go:172] (0xc000789b80) (1) Data frame handling\nI0827 02:24:26.413056    3845 log.go:172] (0xc000789b80) (1) Data frame sent\nI0827 02:24:26.413066    3845 log.go:172] (0xc000a942c0) (0xc000789b80) Stream removed, broadcasting: 1\nI0827 02:24:26.413080    3845 log.go:172] (0xc000a942c0) Go away received\nI0827 02:24:26.413474    3845 log.go:172] (0xc000a942c0) (0xc000789b80) Stream removed, broadcasting: 1\nI0827 02:24:26.413499    3845 log.go:172] (0xc000a942c0) (0xc0008ae000) Stream removed, broadcasting: 3\nI0827 02:24:26.413510    3845 log.go:172] (0xc000a942c0) (0xc0008ae0a0) Stream removed, broadcasting: 5\n"
Aug 27 02:24:26.419: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 27 02:24:26.419: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 27 02:24:26.419: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 27 02:24:56.701: INFO: Deleting all statefulset in ns statefulset-5513
Aug 27 02:24:56.704: INFO: Scaling statefulset ss to 0
Aug 27 02:24:56.716: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 02:24:56.717: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:24:56.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5513" for this suite.

• [SLOW TEST:105.113 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":237,"skipped":4025,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:24:56.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Aug 27 02:24:56.797: INFO: Waiting up to 5m0s for pod "var-expansion-70a9f517-ddcd-432e-9a4f-a04f65a3e0a1" in namespace "var-expansion-5106" to be "success or failure"
Aug 27 02:24:56.801: INFO: Pod "var-expansion-70a9f517-ddcd-432e-9a4f-a04f65a3e0a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115796ms
Aug 27 02:24:58.916: INFO: Pod "var-expansion-70a9f517-ddcd-432e-9a4f-a04f65a3e0a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118632337s
Aug 27 02:25:00.946: INFO: Pod "var-expansion-70a9f517-ddcd-432e-9a4f-a04f65a3e0a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148482229s
Aug 27 02:25:03.041: INFO: Pod "var-expansion-70a9f517-ddcd-432e-9a4f-a04f65a3e0a1": Phase="Running", Reason="", readiness=true. Elapsed: 6.244142069s
Aug 27 02:25:05.044: INFO: Pod "var-expansion-70a9f517-ddcd-432e-9a4f-a04f65a3e0a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.247259058s
STEP: Saw pod success
Aug 27 02:25:05.045: INFO: Pod "var-expansion-70a9f517-ddcd-432e-9a4f-a04f65a3e0a1" satisfied condition "success or failure"
Aug 27 02:25:05.047: INFO: Trying to get logs from node jerma-worker pod var-expansion-70a9f517-ddcd-432e-9a4f-a04f65a3e0a1 container dapi-container: 
STEP: delete the pod
Aug 27 02:25:05.125: INFO: Waiting for pod var-expansion-70a9f517-ddcd-432e-9a4f-a04f65a3e0a1 to disappear
Aug 27 02:25:05.127: INFO: Pod var-expansion-70a9f517-ddcd-432e-9a4f-a04f65a3e0a1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:25:05.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5106" for this suite.

• [SLOW TEST:8.404 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":4033,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:25:05.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:25:05.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3560" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":239,"skipped":4057,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:25:05.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Aug 27 02:25:05.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1773'
Aug 27 02:25:05.696: INFO: stderr: ""
Aug 27 02:25:05.696: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 02:25:05.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1773'
Aug 27 02:25:05.871: INFO: stderr: ""
Aug 27 02:25:05.871: INFO: stdout: "update-demo-nautilus-hswdn update-demo-nautilus-vc2gn "
Aug 27 02:25:05.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hswdn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:06.002: INFO: stderr: ""
Aug 27 02:25:06.002: INFO: stdout: ""
Aug 27 02:25:06.002: INFO: update-demo-nautilus-hswdn is created but not running
Aug 27 02:25:11.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1773'
Aug 27 02:25:11.339: INFO: stderr: ""
Aug 27 02:25:11.339: INFO: stdout: "update-demo-nautilus-hswdn update-demo-nautilus-vc2gn "
Aug 27 02:25:11.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hswdn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:11.430: INFO: stderr: ""
Aug 27 02:25:11.430: INFO: stdout: ""
Aug 27 02:25:11.430: INFO: update-demo-nautilus-hswdn is created but not running
Aug 27 02:25:16.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1773'
Aug 27 02:25:16.529: INFO: stderr: ""
Aug 27 02:25:16.529: INFO: stdout: "update-demo-nautilus-hswdn update-demo-nautilus-vc2gn "
Aug 27 02:25:16.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hswdn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:16.613: INFO: stderr: ""
Aug 27 02:25:16.613: INFO: stdout: "true"
Aug 27 02:25:16.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hswdn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:16.921: INFO: stderr: ""
Aug 27 02:25:16.921: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 02:25:16.921: INFO: validating pod update-demo-nautilus-hswdn
Aug 27 02:25:16.968: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 02:25:16.968: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 02:25:16.968: INFO: update-demo-nautilus-hswdn is verified up and running
Aug 27 02:25:16.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vc2gn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:17.060: INFO: stderr: ""
Aug 27 02:25:17.060: INFO: stdout: "true"
Aug 27 02:25:17.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vc2gn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:17.146: INFO: stderr: ""
Aug 27 02:25:17.146: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 02:25:17.146: INFO: validating pod update-demo-nautilus-vc2gn
Aug 27 02:25:17.149: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 02:25:17.149: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 02:25:17.149: INFO: update-demo-nautilus-vc2gn is verified up and running
STEP: scaling down the replication controller
Aug 27 02:25:17.151: INFO: scanned /root for discovery docs: 
Aug 27 02:25:17.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1773'
Aug 27 02:25:18.348: INFO: stderr: ""
Aug 27 02:25:18.348: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 02:25:18.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1773'
Aug 27 02:25:18.453: INFO: stderr: ""
Aug 27 02:25:18.453: INFO: stdout: "update-demo-nautilus-hswdn update-demo-nautilus-vc2gn "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 27 02:25:23.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1773'
Aug 27 02:25:23.544: INFO: stderr: ""
Aug 27 02:25:23.544: INFO: stdout: "update-demo-nautilus-vc2gn "
Aug 27 02:25:23.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vc2gn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:23.636: INFO: stderr: ""
Aug 27 02:25:23.636: INFO: stdout: "true"
Aug 27 02:25:23.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vc2gn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:23.716: INFO: stderr: ""
Aug 27 02:25:23.716: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 02:25:23.716: INFO: validating pod update-demo-nautilus-vc2gn
Aug 27 02:25:23.719: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 02:25:23.719: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 02:25:23.719: INFO: update-demo-nautilus-vc2gn is verified up and running
STEP: scaling up the replication controller
Aug 27 02:25:23.721: INFO: scanned /root for discovery docs: 
Aug 27 02:25:23.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1773'
Aug 27 02:25:24.904: INFO: stderr: ""
Aug 27 02:25:24.904: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 27 02:25:24.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1773'
Aug 27 02:25:25.005: INFO: stderr: ""
Aug 27 02:25:25.005: INFO: stdout: "update-demo-nautilus-9nlbn update-demo-nautilus-vc2gn "
Aug 27 02:25:25.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9nlbn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:25.101: INFO: stderr: ""
Aug 27 02:25:25.101: INFO: stdout: ""
Aug 27 02:25:25.101: INFO: update-demo-nautilus-9nlbn is created but not running
Aug 27 02:25:30.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1773'
Aug 27 02:25:30.212: INFO: stderr: ""
Aug 27 02:25:30.212: INFO: stdout: "update-demo-nautilus-9nlbn update-demo-nautilus-vc2gn "
Aug 27 02:25:30.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9nlbn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:30.306: INFO: stderr: ""
Aug 27 02:25:30.306: INFO: stdout: "true"
Aug 27 02:25:30.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9nlbn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:30.396: INFO: stderr: ""
Aug 27 02:25:30.397: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 02:25:30.397: INFO: validating pod update-demo-nautilus-9nlbn
Aug 27 02:25:30.400: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 02:25:30.400: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 02:25:30.400: INFO: update-demo-nautilus-9nlbn is verified up and running
Aug 27 02:25:30.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vc2gn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:30.494: INFO: stderr: ""
Aug 27 02:25:30.494: INFO: stdout: "true"
Aug 27 02:25:30.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vc2gn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1773'
Aug 27 02:25:30.587: INFO: stderr: ""
Aug 27 02:25:30.587: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 27 02:25:30.587: INFO: validating pod update-demo-nautilus-vc2gn
Aug 27 02:25:30.589: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 27 02:25:30.589: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 27 02:25:30.589: INFO: update-demo-nautilus-vc2gn is verified up and running
STEP: using delete to clean up resources
Aug 27 02:25:30.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1773'
Aug 27 02:25:30.820: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 27 02:25:30.820: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 27 02:25:30.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1773'
Aug 27 02:25:30.934: INFO: stderr: "No resources found in kubectl-1773 namespace.\n"
Aug 27 02:25:30.934: INFO: stdout: ""
Aug 27 02:25:30.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1773 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 27 02:25:31.056: INFO: stderr: ""
Aug 27 02:25:31.056: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:25:31.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1773" for this suite.

• [SLOW TEST:25.871 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":240,"skipped":4084,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:25:31.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:25:31.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:25:40.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4400" for this suite.

• [SLOW TEST:9.315 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4110,"failed":0}
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:25:40.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6411
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 27 02:25:40.873: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 27 02:26:13.185: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.62 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6411 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 02:26:13.185: INFO: >>> kubeConfig: /root/.kube/config
I0827 02:26:13.209778       6 log.go:172] (0xc002f789a0) (0xc001d51040) Create stream
I0827 02:26:13.209802       6 log.go:172] (0xc002f789a0) (0xc001d51040) Stream added, broadcasting: 1
I0827 02:26:13.211384       6 log.go:172] (0xc002f789a0) Reply frame received for 1
I0827 02:26:13.211415       6 log.go:172] (0xc002f789a0) (0xc00163e500) Create stream
I0827 02:26:13.211427       6 log.go:172] (0xc002f789a0) (0xc00163e500) Stream added, broadcasting: 3
I0827 02:26:13.212314       6 log.go:172] (0xc002f789a0) Reply frame received for 3
I0827 02:26:13.212331       6 log.go:172] (0xc002f789a0) (0xc001d510e0) Create stream
I0827 02:26:13.212336       6 log.go:172] (0xc002f789a0) (0xc001d510e0) Stream added, broadcasting: 5
I0827 02:26:13.213468       6 log.go:172] (0xc002f789a0) Reply frame received for 5
I0827 02:26:14.275089       6 log.go:172] (0xc002f789a0) Data frame received for 3
I0827 02:26:14.275124       6 log.go:172] (0xc00163e500) (3) Data frame handling
I0827 02:26:14.275155       6 log.go:172] (0xc00163e500) (3) Data frame sent
I0827 02:26:14.275421       6 log.go:172] (0xc002f789a0) Data frame received for 3
I0827 02:26:14.275466       6 log.go:172] (0xc00163e500) (3) Data frame handling
I0827 02:26:14.275718       6 log.go:172] (0xc002f789a0) Data frame received for 5
I0827 02:26:14.275738       6 log.go:172] (0xc001d510e0) (5) Data frame handling
I0827 02:26:14.277594       6 log.go:172] (0xc002f789a0) Data frame received for 1
I0827 02:26:14.277618       6 log.go:172] (0xc001d51040) (1) Data frame handling
I0827 02:26:14.277631       6 log.go:172] (0xc001d51040) (1) Data frame sent
I0827 02:26:14.277650       6 log.go:172] (0xc002f789a0) (0xc001d51040) Stream removed, broadcasting: 1
I0827 02:26:14.277678       6 log.go:172] (0xc002f789a0) Go away received
I0827 02:26:14.277792       6 log.go:172] (0xc002f789a0) (0xc001d51040) Stream removed, broadcasting: 1
I0827 02:26:14.277815       6 log.go:172] (0xc002f789a0) (0xc00163e500) Stream removed, broadcasting: 3
I0827 02:26:14.277824       6 log.go:172] (0xc002f789a0) (0xc001d510e0) Stream removed, broadcasting: 5
Aug 27 02:26:14.277: INFO: Found all expected endpoints: [netserver-0]
Aug 27 02:26:14.359: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.220 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6411 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 02:26:14.359: INFO: >>> kubeConfig: /root/.kube/config
I0827 02:26:14.651545       6 log.go:172] (0xc002f78f20) (0xc001d519a0) Create stream
I0827 02:26:14.651579       6 log.go:172] (0xc002f78f20) (0xc001d519a0) Stream added, broadcasting: 1
I0827 02:26:14.653523       6 log.go:172] (0xc002f78f20) Reply frame received for 1
I0827 02:26:14.653556       6 log.go:172] (0xc002f78f20) (0xc00163e640) Create stream
I0827 02:26:14.653565       6 log.go:172] (0xc002f78f20) (0xc00163e640) Stream added, broadcasting: 3
I0827 02:26:14.660410       6 log.go:172] (0xc002f78f20) Reply frame received for 3
I0827 02:26:14.660443       6 log.go:172] (0xc002f78f20) (0xc0015cc460) Create stream
I0827 02:26:14.660451       6 log.go:172] (0xc002f78f20) (0xc0015cc460) Stream added, broadcasting: 5
I0827 02:26:14.661389       6 log.go:172] (0xc002f78f20) Reply frame received for 5
I0827 02:26:15.739077       6 log.go:172] (0xc002f78f20) Data frame received for 5
I0827 02:26:15.739124       6 log.go:172] (0xc0015cc460) (5) Data frame handling
I0827 02:26:15.739184       6 log.go:172] (0xc002f78f20) Data frame received for 3
I0827 02:26:15.739231       6 log.go:172] (0xc00163e640) (3) Data frame handling
I0827 02:26:15.739262       6 log.go:172] (0xc00163e640) (3) Data frame sent
I0827 02:26:15.739287       6 log.go:172] (0xc002f78f20) Data frame received for 3
I0827 02:26:15.739306       6 log.go:172] (0xc00163e640) (3) Data frame handling
I0827 02:26:15.741490       6 log.go:172] (0xc002f78f20) Data frame received for 1
I0827 02:26:15.741547       6 log.go:172] (0xc001d519a0) (1) Data frame handling
I0827 02:26:15.741597       6 log.go:172] (0xc001d519a0) (1) Data frame sent
I0827 02:26:15.741624       6 log.go:172] (0xc002f78f20) (0xc001d519a0) Stream removed, broadcasting: 1
I0827 02:26:15.741659       6 log.go:172] (0xc002f78f20) Go away received
I0827 02:26:15.741769       6 log.go:172] (0xc002f78f20) (0xc001d519a0) Stream removed, broadcasting: 1
I0827 02:26:15.741794       6 log.go:172] (0xc002f78f20) (0xc00163e640) Stream removed, broadcasting: 3
I0827 02:26:15.741816       6 log.go:172] (0xc002f78f20) (0xc0015cc460) Stream removed, broadcasting: 5
Aug 27 02:26:15.741: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:26:15.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6411" for this suite.

• [SLOW TEST:35.374 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4110,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:26:15.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 27 02:26:25.306: INFO: Pod name wrapped-volume-race-8057d0bb-4809-4ffc-bc1d-d1508d4d72ef: Found 0 pods out of 5
Aug 27 02:26:30.796: INFO: Pod name wrapped-volume-race-8057d0bb-4809-4ffc-bc1d-d1508d4d72ef: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8057d0bb-4809-4ffc-bc1d-d1508d4d72ef in namespace emptydir-wrapper-1322, will wait for the garbage collector to delete the pods
Aug 27 02:26:54.020: INFO: Deleting ReplicationController wrapped-volume-race-8057d0bb-4809-4ffc-bc1d-d1508d4d72ef took: 113.096075ms
Aug 27 02:26:54.320: INFO: Terminating ReplicationController wrapped-volume-race-8057d0bb-4809-4ffc-bc1d-d1508d4d72ef pods took: 300.251697ms
STEP: Creating RC which spawns configmap-volume pods
Aug 27 02:27:05.078: INFO: Pod name wrapped-volume-race-19f9292e-d6bf-4e67-9d90-cc8fab9950e1: Found 0 pods out of 5
Aug 27 02:27:10.085: INFO: Pod name wrapped-volume-race-19f9292e-d6bf-4e67-9d90-cc8fab9950e1: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-19f9292e-d6bf-4e67-9d90-cc8fab9950e1 in namespace emptydir-wrapper-1322, will wait for the garbage collector to delete the pods
Aug 27 02:27:26.443: INFO: Deleting ReplicationController wrapped-volume-race-19f9292e-d6bf-4e67-9d90-cc8fab9950e1 took: 71.573768ms
Aug 27 02:27:27.343: INFO: Terminating ReplicationController wrapped-volume-race-19f9292e-d6bf-4e67-9d90-cc8fab9950e1 pods took: 900.260635ms
STEP: Creating RC which spawns configmap-volume pods
Aug 27 02:27:41.977: INFO: Pod name wrapped-volume-race-39297a79-58eb-40c7-a3fb-be842f3e512a: Found 0 pods out of 5
Aug 27 02:27:46.982: INFO: Pod name wrapped-volume-race-39297a79-58eb-40c7-a3fb-be842f3e512a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-39297a79-58eb-40c7-a3fb-be842f3e512a in namespace emptydir-wrapper-1322, will wait for the garbage collector to delete the pods
Aug 27 02:28:10.069: INFO: Deleting ReplicationController wrapped-volume-race-39297a79-58eb-40c7-a3fb-be842f3e512a took: 6.590836ms
Aug 27 02:28:10.469: INFO: Terminating ReplicationController wrapped-volume-race-39297a79-58eb-40c7-a3fb-be842f3e512a pods took: 400.342562ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:28:28.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1322" for this suite.

• [SLOW TEST:133.073 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":243,"skipped":4121,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:28:28.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 02:28:30.293: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 02:28:32.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092110, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092110, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092110, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092110, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 02:28:34.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092110, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092110, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092110, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092110, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 02:28:37.628: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:28:37.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1280-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:28:38.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8132" for this suite.
STEP: Destroying namespace "webhook-8132-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.644 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":244,"skipped":4133,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:28:39.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:28:39.715: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:28:47.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6563" for this suite.

• [SLOW TEST:8.193 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":245,"skipped":4156,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:28:47.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-1eff8b84-168a-4c3b-b328-d5b8408bf94b
STEP: Creating a pod to test consume configMaps
Aug 27 02:28:47.888: INFO: Waiting up to 5m0s for pod "pod-configmaps-27b79f7d-2497-49a1-b7e4-de3531f7dcda" in namespace "configmap-2663" to be "success or failure"
Aug 27 02:28:47.974: INFO: Pod "pod-configmaps-27b79f7d-2497-49a1-b7e4-de3531f7dcda": Phase="Pending", Reason="", readiness=false. Elapsed: 85.345591ms
Aug 27 02:28:50.122: INFO: Pod "pod-configmaps-27b79f7d-2497-49a1-b7e4-de3531f7dcda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233387732s
Aug 27 02:28:52.125: INFO: Pod "pod-configmaps-27b79f7d-2497-49a1-b7e4-de3531f7dcda": Phase="Running", Reason="", readiness=true. Elapsed: 4.236901435s
Aug 27 02:28:54.163: INFO: Pod "pod-configmaps-27b79f7d-2497-49a1-b7e4-de3531f7dcda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.274698885s
STEP: Saw pod success
Aug 27 02:28:54.163: INFO: Pod "pod-configmaps-27b79f7d-2497-49a1-b7e4-de3531f7dcda" satisfied condition "success or failure"
Aug 27 02:28:54.166: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-27b79f7d-2497-49a1-b7e4-de3531f7dcda container configmap-volume-test: 
STEP: delete the pod
Aug 27 02:28:54.249: INFO: Waiting for pod pod-configmaps-27b79f7d-2497-49a1-b7e4-de3531f7dcda to disappear
Aug 27 02:28:54.354: INFO: Pod pod-configmaps-27b79f7d-2497-49a1-b7e4-de3531f7dcda no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:28:54.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2663" for this suite.

• [SLOW TEST:6.697 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4178,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:28:54.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:28:54.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5068" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":247,"skipped":4184,"failed":0}

------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:28:54.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 27 02:28:54.689: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-9299 /api/v1/namespaces/watch-9299/configmaps/e2e-watch-test-watch-closed 343dfb25-b272-4748-b6da-d86a7b94c8e7 4100391 0 2020-08-27 02:28:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 27 02:28:54.689: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-9299 /api/v1/namespaces/watch-9299/configmaps/e2e-watch-test-watch-closed 343dfb25-b272-4748-b6da-d86a7b94c8e7 4100392 0 2020-08-27 02:28:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 27 02:28:54.775: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-9299 /api/v1/namespaces/watch-9299/configmaps/e2e-watch-test-watch-closed 343dfb25-b272-4748-b6da-d86a7b94c8e7 4100393 0 2020-08-27 02:28:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 02:28:54.776: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-9299 /api/v1/namespaces/watch-9299/configmaps/e2e-watch-test-watch-closed 343dfb25-b272-4748-b6da-d86a7b94c8e7 4100394 0 2020-08-27 02:28:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:28:54.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9299" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":248,"skipped":4184,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:28:54.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 02:28:55.916: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 02:28:58.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092135, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092135, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092136, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092135, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 02:29:00.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092135, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092135, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092136, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092135, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 02:29:03.119: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:29:04.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6871" for this suite.
STEP: Destroying namespace "webhook-6871-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.839 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":249,"skipped":4200,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:29:04.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 02:29:05.930: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 02:29:07.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092146, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092146, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092146, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092145, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 02:29:10.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092146, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092146, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092146, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092145, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 02:29:11.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092146, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092146, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092146, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092145, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 02:29:15.256: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 27 02:29:21.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3460 to-be-attached-pod -i -c=container1'
Aug 27 02:29:21.725: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:29:21.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3460" for this suite.
STEP: Destroying namespace "webhook-3460-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.785 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":250,"skipped":4219,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:29:22.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 27 02:29:22.757: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 02:29:23.506: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 02:29:23.517: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Aug 27 02:29:23.522: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 27 02:29:23.522: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 02:29:23.522: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 27 02:29:23.522: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 02:29:23.522: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 27 02:29:23.522: INFO: 	Container app ready: true, restart count 0
Aug 27 02:29:23.522: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 27 02:29:23.535: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container status recorded)
Aug 27 02:29:23.535: INFO: 	Container httpd ready: true, restart count 0
Aug 27 02:29:23.535: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 27 02:29:23.535: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 02:29:23.535: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container status recorded)
Aug 27 02:29:23.535: INFO: 	Container app ready: true, restart count 0
Aug 27 02:29:23.535: INFO: to-be-attached-pod from webhook-3460 started at 2020-08-27 02:29:15 +0000 UTC (1 container status recorded)
Aug 27 02:29:23.535: INFO: 	Container container1 ready: true, restart count 0
Aug 27 02:29:23.535: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container status recorded)
Aug 27 02:29:23.535: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-0b776656-eb19-4b4b-b5b9-df35fb522f0e 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-0b776656-eb19-4b4b-b5b9-df35fb522f0e off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0b776656-eb19-4b4b-b5b9-df35fb522f0e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:29:35.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3850" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:13.506 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":251,"skipped":4227,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:29:35.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:29:36.172: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 27 02:29:38.260: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:29:39.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3444" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":252,"skipped":4241,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:29:39.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 27 02:29:48.791: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:29:48.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6729" for this suite.

• [SLOW TEST:9.111 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4257,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:29:48.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:29:48.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7215
I0827 02:29:48.931890       6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7215, replica count: 1
I0827 02:29:49.982328       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 02:29:50.982574       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 02:29:51.982870       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0827 02:29:52.983104       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 27 02:29:53.130: INFO: Created: latency-svc-kflts
Aug 27 02:29:53.160: INFO: Got endpoints: latency-svc-kflts [76.866634ms]
Aug 27 02:29:53.240: INFO: Created: latency-svc-qlmjn
Aug 27 02:29:53.257: INFO: Got endpoints: latency-svc-qlmjn [96.864063ms]
Aug 27 02:29:53.276: INFO: Created: latency-svc-kzq4q
Aug 27 02:29:53.284: INFO: Got endpoints: latency-svc-kzq4q [124.27169ms]
Aug 27 02:29:53.337: INFO: Created: latency-svc-p5lqz
Aug 27 02:29:53.393: INFO: Got endpoints: latency-svc-p5lqz [233.19487ms]
Aug 27 02:29:53.505: INFO: Created: latency-svc-srdlz
Aug 27 02:29:53.517: INFO: Got endpoints: latency-svc-srdlz [357.180163ms]
Aug 27 02:29:53.553: INFO: Created: latency-svc-c6hvq
Aug 27 02:29:53.570: INFO: Got endpoints: latency-svc-c6hvq [410.369079ms]
Aug 27 02:29:53.604: INFO: Created: latency-svc-84fsg
Aug 27 02:29:53.649: INFO: Got endpoints: latency-svc-84fsg [489.266283ms]
Aug 27 02:29:53.657: INFO: Created: latency-svc-26ppd
Aug 27 02:29:53.677: INFO: Got endpoints: latency-svc-26ppd [517.267419ms]
Aug 27 02:29:53.718: INFO: Created: latency-svc-6hdvq
Aug 27 02:29:53.735: INFO: Got endpoints: latency-svc-6hdvq [574.709468ms]
Aug 27 02:29:53.817: INFO: Created: latency-svc-lvtr4
Aug 27 02:29:53.841: INFO: Got endpoints: latency-svc-lvtr4 [680.583025ms]
Aug 27 02:29:53.874: INFO: Created: latency-svc-tc7th
Aug 27 02:29:53.891: INFO: Got endpoints: latency-svc-tc7th [731.556644ms]
Aug 27 02:29:53.918: INFO: Created: latency-svc-qmw48
Aug 27 02:29:54.014: INFO: Got endpoints: latency-svc-qmw48 [854.165757ms]
Aug 27 02:29:54.015: INFO: Created: latency-svc-gjzhf
Aug 27 02:29:54.034: INFO: Got endpoints: latency-svc-gjzhf [873.919778ms]
Aug 27 02:29:54.108: INFO: Created: latency-svc-hmnqk
Aug 27 02:29:54.175: INFO: Got endpoints: latency-svc-hmnqk [1.015632749s]
Aug 27 02:29:54.213: INFO: Created: latency-svc-nvjdv
Aug 27 02:29:54.222: INFO: Got endpoints: latency-svc-nvjdv [1.062180812s]
Aug 27 02:29:54.276: INFO: Created: latency-svc-7txdl
Aug 27 02:29:54.326: INFO: Got endpoints: latency-svc-7txdl [1.16601696s]
Aug 27 02:29:54.369: INFO: Created: latency-svc-wldqr
Aug 27 02:29:54.386: INFO: Got endpoints: latency-svc-wldqr [1.128915267s]
Aug 27 02:29:54.405: INFO: Created: latency-svc-nmnqv
Aug 27 02:29:54.523: INFO: Got endpoints: latency-svc-nmnqv [1.238998673s]
Aug 27 02:29:54.605: INFO: Created: latency-svc-xmj4d
Aug 27 02:29:54.621: INFO: Got endpoints: latency-svc-xmj4d [1.227625324s]
Aug 27 02:29:54.681: INFO: Created: latency-svc-xmhnj
Aug 27 02:29:54.705: INFO: Got endpoints: latency-svc-xmhnj [1.188098797s]
Aug 27 02:29:54.738: INFO: Created: latency-svc-vqnvz
Aug 27 02:29:54.753: INFO: Got endpoints: latency-svc-vqnvz [1.182268803s]
Aug 27 02:29:54.847: INFO: Created: latency-svc-2j2v7
Aug 27 02:29:54.849: INFO: Got endpoints: latency-svc-2j2v7 [1.199836513s]
Aug 27 02:29:54.918: INFO: Created: latency-svc-s4lsk
Aug 27 02:29:54.937: INFO: Got endpoints: latency-svc-s4lsk [1.260025263s]
Aug 27 02:29:54.984: INFO: Created: latency-svc-54lzf
Aug 27 02:29:55.009: INFO: Got endpoints: latency-svc-54lzf [1.27397449s]
Aug 27 02:29:55.158: INFO: Created: latency-svc-w99zf
Aug 27 02:29:55.162: INFO: Got endpoints: latency-svc-w99zf [1.321478139s]
Aug 27 02:29:55.206: INFO: Created: latency-svc-4z6gm
Aug 27 02:29:55.221: INFO: Got endpoints: latency-svc-4z6gm [1.3297642s]
Aug 27 02:29:55.314: INFO: Created: latency-svc-j22gv
Aug 27 02:29:55.330: INFO: Got endpoints: latency-svc-j22gv [1.316039871s]
Aug 27 02:29:55.356: INFO: Created: latency-svc-db5kt
Aug 27 02:29:55.372: INFO: Got endpoints: latency-svc-db5kt [1.33770536s]
Aug 27 02:29:55.407: INFO: Created: latency-svc-hd85r
Aug 27 02:29:55.464: INFO: Got endpoints: latency-svc-hd85r [1.288012558s]
Aug 27 02:29:55.480: INFO: Created: latency-svc-x2j62
Aug 27 02:29:55.498: INFO: Got endpoints: latency-svc-x2j62 [1.276023939s]
Aug 27 02:29:55.518: INFO: Created: latency-svc-pdq5l
Aug 27 02:29:55.537: INFO: Got endpoints: latency-svc-pdq5l [1.211367836s]
Aug 27 02:29:55.757: INFO: Created: latency-svc-gjgg8
Aug 27 02:29:55.763: INFO: Got endpoints: latency-svc-gjgg8 [1.377147488s]
Aug 27 02:29:55.916: INFO: Created: latency-svc-6d6fc
Aug 27 02:29:55.946: INFO: Got endpoints: latency-svc-6d6fc [1.422069834s]
Aug 27 02:29:56.172: INFO: Created: latency-svc-wjf48
Aug 27 02:29:56.191: INFO: Got endpoints: latency-svc-wjf48 [1.570182571s]
Aug 27 02:29:56.470: INFO: Created: latency-svc-gnsqv
Aug 27 02:29:56.690: INFO: Got endpoints: latency-svc-gnsqv [1.985274812s]
Aug 27 02:29:56.692: INFO: Created: latency-svc-7dwr6
Aug 27 02:29:56.695: INFO: Got endpoints: latency-svc-7dwr6 [1.94269442s]
Aug 27 02:29:56.882: INFO: Created: latency-svc-77k95
Aug 27 02:29:56.929: INFO: Created: latency-svc-ntmng
Aug 27 02:29:56.929: INFO: Got endpoints: latency-svc-77k95 [2.080323025s]
Aug 27 02:29:57.533: INFO: Got endpoints: latency-svc-ntmng [2.595652326s]
Aug 27 02:29:57.880: INFO: Created: latency-svc-z544p
Aug 27 02:29:57.932: INFO: Got endpoints: latency-svc-z544p [2.923380009s]
Aug 27 02:29:58.068: INFO: Created: latency-svc-fkshv
Aug 27 02:29:58.109: INFO: Got endpoints: latency-svc-fkshv [2.946983603s]
Aug 27 02:29:58.168: INFO: Created: latency-svc-btfn2
Aug 27 02:29:58.463: INFO: Got endpoints: latency-svc-btfn2 [3.24119126s]
Aug 27 02:29:58.636: INFO: Created: latency-svc-fjqbd
Aug 27 02:29:58.669: INFO: Got endpoints: latency-svc-fjqbd [3.339106153s]
Aug 27 02:29:58.847: INFO: Created: latency-svc-mmhgr
Aug 27 02:29:58.850: INFO: Got endpoints: latency-svc-mmhgr [3.478542237s]
Aug 27 02:29:58.907: INFO: Created: latency-svc-5zc8j
Aug 27 02:29:58.925: INFO: Got endpoints: latency-svc-5zc8j [3.46094029s]
Aug 27 02:29:59.020: INFO: Created: latency-svc-l8tfn
Aug 27 02:29:59.023: INFO: Got endpoints: latency-svc-l8tfn [3.524581808s]
Aug 27 02:29:59.058: INFO: Created: latency-svc-2jxbf
Aug 27 02:29:59.072: INFO: Got endpoints: latency-svc-2jxbf [3.534204829s]
Aug 27 02:29:59.096: INFO: Created: latency-svc-2ntww
Aug 27 02:29:59.114: INFO: Got endpoints: latency-svc-2ntww [3.351411964s]
Aug 27 02:29:59.170: INFO: Created: latency-svc-8b6ft
Aug 27 02:29:59.187: INFO: Got endpoints: latency-svc-8b6ft [3.241178836s]
Aug 27 02:29:59.229: INFO: Created: latency-svc-85jnq
Aug 27 02:29:59.246: INFO: Got endpoints: latency-svc-85jnq [3.055193382s]
Aug 27 02:29:59.332: INFO: Created: latency-svc-xdsrb
Aug 27 02:29:59.335: INFO: Got endpoints: latency-svc-xdsrb [2.644202491s]
Aug 27 02:29:59.394: INFO: Created: latency-svc-n64th
Aug 27 02:29:59.409: INFO: Got endpoints: latency-svc-n64th [2.713191993s]
Aug 27 02:29:59.429: INFO: Created: latency-svc-znkwk
Aug 27 02:29:59.487: INFO: Got endpoints: latency-svc-znkwk [2.557648883s]
Aug 27 02:29:59.498: INFO: Created: latency-svc-tl8nh
Aug 27 02:29:59.537: INFO: Got endpoints: latency-svc-tl8nh [2.003769444s]
Aug 27 02:29:59.555: INFO: Created: latency-svc-whc8g
Aug 27 02:29:59.563: INFO: Got endpoints: latency-svc-whc8g [1.631215429s]
Aug 27 02:29:59.583: INFO: Created: latency-svc-lnrqw
Aug 27 02:29:59.585: INFO: Got endpoints: latency-svc-lnrqw [1.475458023s]
Aug 27 02:29:59.639: INFO: Created: latency-svc-kzttj
Aug 27 02:29:59.651: INFO: Got endpoints: latency-svc-kzttj [1.188350419s]
Aug 27 02:29:59.673: INFO: Created: latency-svc-2r8xj
Aug 27 02:29:59.690: INFO: Got endpoints: latency-svc-2r8xj [1.020436396s]
Aug 27 02:29:59.714: INFO: Created: latency-svc-5br69
Aug 27 02:29:59.732: INFO: Got endpoints: latency-svc-5br69 [881.91243ms]
Aug 27 02:29:59.801: INFO: Created: latency-svc-gq2d5
Aug 27 02:29:59.828: INFO: Got endpoints: latency-svc-gq2d5 [903.491664ms]
Aug 27 02:29:59.930: INFO: Created: latency-svc-nbbck
Aug 27 02:29:59.933: INFO: Got endpoints: latency-svc-nbbck [910.52004ms]
Aug 27 02:29:59.975: INFO: Created: latency-svc-rk5xt
Aug 27 02:30:00.116: INFO: Got endpoints: latency-svc-rk5xt [1.043826864s]
Aug 27 02:30:00.128: INFO: Created: latency-svc-qzqqk
Aug 27 02:30:00.141: INFO: Got endpoints: latency-svc-qzqqk [1.026540487s]
Aug 27 02:30:00.171: INFO: Created: latency-svc-j62ct
Aug 27 02:30:00.181: INFO: Got endpoints: latency-svc-j62ct [993.916001ms]
Aug 27 02:30:00.209: INFO: Created: latency-svc-dqmzb
Aug 27 02:30:00.212: INFO: Got endpoints: latency-svc-dqmzb [966.002378ms]
Aug 27 02:30:00.327: INFO: Created: latency-svc-b8f7q
Aug 27 02:30:00.346: INFO: Got endpoints: latency-svc-b8f7q [1.01097235s]
Aug 27 02:30:00.427: INFO: Created: latency-svc-x6shf
Aug 27 02:30:00.430: INFO: Got endpoints: latency-svc-x6shf [1.021507215s]
Aug 27 02:30:00.517: INFO: Created: latency-svc-wrqg7
Aug 27 02:30:00.577: INFO: Got endpoints: latency-svc-wrqg7 [1.089576441s]
Aug 27 02:30:00.591: INFO: Created: latency-svc-226fs
Aug 27 02:30:00.638: INFO: Got endpoints: latency-svc-226fs [1.100617355s]
Aug 27 02:30:00.676: INFO: Created: latency-svc-f9x5t
Aug 27 02:30:00.739: INFO: Got endpoints: latency-svc-f9x5t [1.175558041s]
Aug 27 02:30:00.757: INFO: Created: latency-svc-rv4m8
Aug 27 02:30:00.766: INFO: Got endpoints: latency-svc-rv4m8 [1.180567843s]
Aug 27 02:30:00.786: INFO: Created: latency-svc-2g2lg
Aug 27 02:30:00.808: INFO: Got endpoints: latency-svc-2g2lg [1.157068645s]
Aug 27 02:30:00.907: INFO: Created: latency-svc-tq75p
Aug 27 02:30:00.910: INFO: Got endpoints: latency-svc-tq75p [1.220433916s]
Aug 27 02:30:00.940: INFO: Created: latency-svc-qmsv2
Aug 27 02:30:00.958: INFO: Got endpoints: latency-svc-qmsv2 [1.226046035s]
Aug 27 02:30:00.984: INFO: Created: latency-svc-htn7x
Aug 27 02:30:01.001: INFO: Got endpoints: latency-svc-htn7x [1.172633954s]
Aug 27 02:30:01.050: INFO: Created: latency-svc-hglwq
Aug 27 02:30:01.052: INFO: Got endpoints: latency-svc-hglwq [1.118568211s]
Aug 27 02:30:01.087: INFO: Created: latency-svc-45sxg
Aug 27 02:30:01.099: INFO: Got endpoints: latency-svc-45sxg [983.745131ms]
Aug 27 02:30:01.120: INFO: Created: latency-svc-zvjng
Aug 27 02:30:01.134: INFO: Got endpoints: latency-svc-zvjng [992.757371ms]
Aug 27 02:30:01.194: INFO: Created: latency-svc-qqgr9
Aug 27 02:30:01.197: INFO: Got endpoints: latency-svc-qqgr9 [1.016531421s]
Aug 27 02:30:01.230: INFO: Created: latency-svc-xb2dn
Aug 27 02:30:01.248: INFO: Got endpoints: latency-svc-xb2dn [1.03538057s]
Aug 27 02:30:01.282: INFO: Created: latency-svc-9tjxt
Aug 27 02:30:01.355: INFO: Got endpoints: latency-svc-9tjxt [1.009648507s]
Aug 27 02:30:01.357: INFO: Created: latency-svc-2xbcp
Aug 27 02:30:01.381: INFO: Got endpoints: latency-svc-2xbcp [950.346976ms]
Aug 27 02:30:01.547: INFO: Created: latency-svc-77hrs
Aug 27 02:30:01.557: INFO: Got endpoints: latency-svc-77hrs [980.692258ms]
Aug 27 02:30:01.590: INFO: Created: latency-svc-skkwj
Aug 27 02:30:01.614: INFO: Got endpoints: latency-svc-skkwj [976.696126ms]
Aug 27 02:30:01.645: INFO: Created: latency-svc-25vsp
Aug 27 02:30:01.691: INFO: Got endpoints: latency-svc-25vsp [951.903714ms]
Aug 27 02:30:01.708: INFO: Created: latency-svc-8kc4k
Aug 27 02:30:01.723: INFO: Got endpoints: latency-svc-8kc4k [957.498752ms]
Aug 27 02:30:01.744: INFO: Created: latency-svc-x67zw
Aug 27 02:30:01.759: INFO: Got endpoints: latency-svc-x67zw [950.961805ms]
Aug 27 02:30:01.871: INFO: Created: latency-svc-q564r
Aug 27 02:30:01.874: INFO: Got endpoints: latency-svc-q564r [963.929074ms]
Aug 27 02:30:02.188: INFO: Created: latency-svc-btwpq
Aug 27 02:30:02.192: INFO: Got endpoints: latency-svc-btwpq [1.23396855s]
Aug 27 02:30:02.645: INFO: Created: latency-svc-sspqg
Aug 27 02:30:02.911: INFO: Got endpoints: latency-svc-sspqg [1.910274855s]
Aug 27 02:30:03.135: INFO: Created: latency-svc-6vxlc
Aug 27 02:30:03.161: INFO: Got endpoints: latency-svc-6vxlc [2.108655309s]
Aug 27 02:30:03.488: INFO: Created: latency-svc-p9d4q
Aug 27 02:30:03.552: INFO: Got endpoints: latency-svc-p9d4q [2.45266073s]
Aug 27 02:30:03.673: INFO: Created: latency-svc-ft657
Aug 27 02:30:03.677: INFO: Got endpoints: latency-svc-ft657 [2.543297611s]
Aug 27 02:30:03.968: INFO: Created: latency-svc-kmkcv
Aug 27 02:30:03.984: INFO: Got endpoints: latency-svc-kmkcv [2.786575693s]
Aug 27 02:30:04.158: INFO: Created: latency-svc-5r64d
Aug 27 02:30:04.161: INFO: Got endpoints: latency-svc-5r64d [2.912932578s]
Aug 27 02:30:04.343: INFO: Created: latency-svc-89lwx
Aug 27 02:30:04.347: INFO: Got endpoints: latency-svc-89lwx [2.991905693s]
Aug 27 02:30:04.391: INFO: Created: latency-svc-lmbf2
Aug 27 02:30:04.397: INFO: Got endpoints: latency-svc-lmbf2 [3.016698637s]
Aug 27 02:30:04.421: INFO: Created: latency-svc-hggq2
Aug 27 02:30:04.428: INFO: Got endpoints: latency-svc-hggq2 [2.870131925s]
Aug 27 02:30:04.487: INFO: Created: latency-svc-575wn
Aug 27 02:30:04.535: INFO: Got endpoints: latency-svc-575wn [2.920310053s]
Aug 27 02:30:04.580: INFO: Created: latency-svc-2544z
Aug 27 02:30:04.619: INFO: Got endpoints: latency-svc-2544z [2.927842659s]
Aug 27 02:30:04.642: INFO: Created: latency-svc-ctdw2
Aug 27 02:30:04.657: INFO: Got endpoints: latency-svc-ctdw2 [2.933402841s]
Aug 27 02:30:04.678: INFO: Created: latency-svc-zm62m
Aug 27 02:30:04.693: INFO: Got endpoints: latency-svc-zm62m [2.933336826s]
Aug 27 02:30:04.715: INFO: Created: latency-svc-v5826
Aug 27 02:30:04.757: INFO: Got endpoints: latency-svc-v5826 [2.882169232s]
Aug 27 02:30:04.777: INFO: Created: latency-svc-qjlw6
Aug 27 02:30:04.813: INFO: Got endpoints: latency-svc-qjlw6 [2.620729212s]
Aug 27 02:30:04.905: INFO: Created: latency-svc-42c6x
Aug 27 02:30:04.915: INFO: Got endpoints: latency-svc-42c6x [2.003804672s]
Aug 27 02:30:04.967: INFO: Created: latency-svc-56fkw
Aug 27 02:30:04.987: INFO: Got endpoints: latency-svc-56fkw [1.826673302s]
Aug 27 02:30:05.038: INFO: Created: latency-svc-hv6fk
Aug 27 02:30:05.041: INFO: Got endpoints: latency-svc-hv6fk [1.489270065s]
Aug 27 02:30:05.078: INFO: Created: latency-svc-gmhdr
Aug 27 02:30:05.096: INFO: Got endpoints: latency-svc-gmhdr [1.418625964s]
Aug 27 02:30:05.184: INFO: Created: latency-svc-z58km
Aug 27 02:30:05.186: INFO: Got endpoints: latency-svc-z58km [1.202452287s]
Aug 27 02:30:05.213: INFO: Created: latency-svc-p9nll
Aug 27 02:30:05.229: INFO: Got endpoints: latency-svc-p9nll [1.067593899s]
Aug 27 02:30:05.264: INFO: Created: latency-svc-67r6k
Aug 27 02:30:05.319: INFO: Got endpoints: latency-svc-67r6k [971.786194ms]
Aug 27 02:30:05.351: INFO: Created: latency-svc-wfnpw
Aug 27 02:30:05.367: INFO: Got endpoints: latency-svc-wfnpw [969.145799ms]
Aug 27 02:30:05.393: INFO: Created: latency-svc-lvlvv
Aug 27 02:30:05.409: INFO: Got endpoints: latency-svc-lvlvv [981.168497ms]
Aug 27 02:30:05.463: INFO: Created: latency-svc-84wps
Aug 27 02:30:05.475: INFO: Got endpoints: latency-svc-84wps [940.136253ms]
Aug 27 02:30:05.507: INFO: Created: latency-svc-n7x52
Aug 27 02:30:05.524: INFO: Got endpoints: latency-svc-n7x52 [904.797947ms]
Aug 27 02:30:05.542: INFO: Created: latency-svc-jvk5t
Aug 27 02:30:05.559: INFO: Got endpoints: latency-svc-jvk5t [902.65121ms]
Aug 27 02:30:05.642: INFO: Created: latency-svc-wrrlc
Aug 27 02:30:05.661: INFO: Got endpoints: latency-svc-wrrlc [968.897026ms]
Aug 27 02:30:05.689: INFO: Created: latency-svc-qbsv6
Aug 27 02:30:05.704: INFO: Got endpoints: latency-svc-qbsv6 [947.189551ms]
Aug 27 02:30:05.726: INFO: Created: latency-svc-jktf8
Aug 27 02:30:06.164: INFO: Got endpoints: latency-svc-jktf8 [1.350367657s]
Aug 27 02:30:06.193: INFO: Created: latency-svc-lsjnz
Aug 27 02:30:06.220: INFO: Got endpoints: latency-svc-lsjnz [1.305344588s]
Aug 27 02:30:06.326: INFO: Created: latency-svc-fdd25
Aug 27 02:30:06.332: INFO: Got endpoints: latency-svc-fdd25 [1.344229046s]
Aug 27 02:30:06.413: INFO: Created: latency-svc-tlvcm
Aug 27 02:30:06.577: INFO: Got endpoints: latency-svc-tlvcm [1.535520195s]
Aug 27 02:30:06.605: INFO: Created: latency-svc-pcc57
Aug 27 02:30:06.616: INFO: Got endpoints: latency-svc-pcc57 [1.519702723s]
Aug 27 02:30:06.714: INFO: Created: latency-svc-zg6vv
Aug 27 02:30:06.718: INFO: Got endpoints: latency-svc-zg6vv [1.531272674s]
Aug 27 02:30:06.770: INFO: Created: latency-svc-vv854
Aug 27 02:30:06.780: INFO: Got endpoints: latency-svc-vv854 [1.551219321s]
Aug 27 02:30:06.852: INFO: Created: latency-svc-bhwcd
Aug 27 02:30:06.855: INFO: Got endpoints: latency-svc-bhwcd [1.536067082s]
Aug 27 02:30:06.926: INFO: Created: latency-svc-zc4sp
Aug 27 02:30:07.002: INFO: Got endpoints: latency-svc-zc4sp [1.635245056s]
Aug 27 02:30:07.037: INFO: Created: latency-svc-ddgg6
Aug 27 02:30:07.235: INFO: Got endpoints: latency-svc-ddgg6 [1.82656957s]
Aug 27 02:30:07.244: INFO: Created: latency-svc-dkbpm
Aug 27 02:30:07.315: INFO: Got endpoints: latency-svc-dkbpm [1.83966487s]
Aug 27 02:30:07.430: INFO: Created: latency-svc-6hrhh
Aug 27 02:30:07.447: INFO: Got endpoints: latency-svc-6hrhh [1.923122158s]
Aug 27 02:30:07.555: INFO: Created: latency-svc-hhwmx
Aug 27 02:30:07.567: INFO: Got endpoints: latency-svc-hhwmx [2.007695223s]
Aug 27 02:30:07.607: INFO: Created: latency-svc-mlfxt
Aug 27 02:30:07.633: INFO: Got endpoints: latency-svc-mlfxt [1.971601383s]
Aug 27 02:30:07.721: INFO: Created: latency-svc-9jldr
Aug 27 02:30:07.751: INFO: Created: latency-svc-7g9cc
Aug 27 02:30:07.751: INFO: Got endpoints: latency-svc-9jldr [2.047482991s]
Aug 27 02:30:07.778: INFO: Got endpoints: latency-svc-7g9cc [1.61409621s]
Aug 27 02:30:07.817: INFO: Created: latency-svc-bjk4t
Aug 27 02:30:07.900: INFO: Got endpoints: latency-svc-bjk4t [1.679625389s]
Aug 27 02:30:07.950: INFO: Created: latency-svc-g524t
Aug 27 02:30:08.073: INFO: Got endpoints: latency-svc-g524t [1.741742177s]
Aug 27 02:30:08.076: INFO: Created: latency-svc-9b5s2
Aug 27 02:30:08.096: INFO: Got endpoints: latency-svc-9b5s2 [1.519058962s]
Aug 27 02:30:08.160: INFO: Created: latency-svc-8mwvd
Aug 27 02:30:08.248: INFO: Got endpoints: latency-svc-8mwvd [1.632334882s]
Aug 27 02:30:08.279: INFO: Created: latency-svc-q67w8
Aug 27 02:30:08.324: INFO: Got endpoints: latency-svc-q67w8 [1.606199409s]
Aug 27 02:30:08.411: INFO: Created: latency-svc-dqdxl
Aug 27 02:30:08.412: INFO: Got endpoints: latency-svc-dqdxl [1.63245919s]
Aug 27 02:30:08.470: INFO: Created: latency-svc-5ws57
Aug 27 02:30:08.492: INFO: Got endpoints: latency-svc-5ws57 [1.636665986s]
Aug 27 02:30:08.577: INFO: Created: latency-svc-m7xgw
Aug 27 02:30:08.580: INFO: Got endpoints: latency-svc-m7xgw [1.578489665s]
Aug 27 02:30:08.625: INFO: Created: latency-svc-95ggr
Aug 27 02:30:08.643: INFO: Got endpoints: latency-svc-95ggr [1.4072427s]
Aug 27 02:30:08.738: INFO: Created: latency-svc-t7gbv
Aug 27 02:30:08.741: INFO: Got endpoints: latency-svc-t7gbv [1.426775144s]
Aug 27 02:30:08.832: INFO: Created: latency-svc-vrwtg
Aug 27 02:30:08.912: INFO: Got endpoints: latency-svc-vrwtg [1.465165148s]
Aug 27 02:30:08.919: INFO: Created: latency-svc-2htvf
Aug 27 02:30:08.961: INFO: Got endpoints: latency-svc-2htvf [1.394240156s]
Aug 27 02:30:09.056: INFO: Created: latency-svc-6mm6f
Aug 27 02:30:09.059: INFO: Got endpoints: latency-svc-6mm6f [1.425416209s]
Aug 27 02:30:09.115: INFO: Created: latency-svc-4bbwn
Aug 27 02:30:09.123: INFO: Got endpoints: latency-svc-4bbwn [1.371398036s]
Aug 27 02:30:09.148: INFO: Created: latency-svc-9vbhx
Aug 27 02:30:09.153: INFO: Got endpoints: latency-svc-9vbhx [1.375018405s]
Aug 27 02:30:09.200: INFO: Created: latency-svc-rg25d
Aug 27 02:30:09.214: INFO: Got endpoints: latency-svc-rg25d [1.313810353s]
Aug 27 02:30:09.244: INFO: Created: latency-svc-82s27
Aug 27 02:30:09.257: INFO: Got endpoints: latency-svc-82s27 [1.183883351s]
Aug 27 02:30:09.339: INFO: Created: latency-svc-nwxlw
Aug 27 02:30:09.347: INFO: Got endpoints: latency-svc-nwxlw [1.25045741s]
Aug 27 02:30:09.392: INFO: Created: latency-svc-b6824
Aug 27 02:30:09.407: INFO: Got endpoints: latency-svc-b6824 [1.158799884s]
Aug 27 02:30:09.435: INFO: Created: latency-svc-79jgz
Aug 27 02:30:09.481: INFO: Got endpoints: latency-svc-79jgz [1.156701609s]
Aug 27 02:30:09.500: INFO: Created: latency-svc-xbcb7
Aug 27 02:30:09.509: INFO: Got endpoints: latency-svc-xbcb7 [1.096902663s]
Aug 27 02:30:09.535: INFO: Created: latency-svc-xqr4g
Aug 27 02:30:09.545: INFO: Got endpoints: latency-svc-xqr4g [1.053277986s]
Aug 27 02:30:09.625: INFO: Created: latency-svc-8hh5m
Aug 27 02:30:09.639: INFO: Got endpoints: latency-svc-8hh5m [1.0587252s]
Aug 27 02:30:09.661: INFO: Created: latency-svc-hcjkc
Aug 27 02:30:09.678: INFO: Got endpoints: latency-svc-hcjkc [1.035271516s]
Aug 27 02:30:09.700: INFO: Created: latency-svc-dwgfl
Aug 27 02:30:09.714: INFO: Got endpoints: latency-svc-dwgfl [972.424796ms]
Aug 27 02:30:09.769: INFO: Created: latency-svc-kt44q
Aug 27 02:30:09.793: INFO: Got endpoints: latency-svc-kt44q [880.51961ms]
Aug 27 02:30:09.823: INFO: Created: latency-svc-gdhcz
Aug 27 02:30:09.840: INFO: Got endpoints: latency-svc-gdhcz [878.887912ms]
Aug 27 02:30:09.859: INFO: Created: latency-svc-dkgs8
Aug 27 02:30:09.906: INFO: Got endpoints: latency-svc-dkgs8 [847.106138ms]
Aug 27 02:30:09.935: INFO: Created: latency-svc-hk8b6
Aug 27 02:30:09.954: INFO: Got endpoints: latency-svc-hk8b6 [831.501632ms]
Aug 27 02:30:09.982: INFO: Created: latency-svc-tzt5s
Aug 27 02:30:09.991: INFO: Got endpoints: latency-svc-tzt5s [838.49928ms]
Aug 27 02:30:10.045: INFO: Created: latency-svc-nlpnm
Aug 27 02:30:10.057: INFO: Got endpoints: latency-svc-nlpnm [843.045226ms]
Aug 27 02:30:10.082: INFO: Created: latency-svc-wd6rl
Aug 27 02:30:10.099: INFO: Got endpoints: latency-svc-wd6rl [842.071703ms]
Aug 27 02:30:10.122: INFO: Created: latency-svc-54pj8
Aug 27 02:30:10.142: INFO: Got endpoints: latency-svc-54pj8 [795.170996ms]
Aug 27 02:30:10.200: INFO: Created: latency-svc-pfbvb
Aug 27 02:30:10.214: INFO: Got endpoints: latency-svc-pfbvb [807.357966ms]
Aug 27 02:30:10.240: INFO: Created: latency-svc-w45hh
Aug 27 02:30:10.297: INFO: Got endpoints: latency-svc-w45hh [815.91363ms]
Aug 27 02:30:10.379: INFO: Created: latency-svc-7tthf
Aug 27 02:30:10.390: INFO: Got endpoints: latency-svc-7tthf [880.256447ms]
Aug 27 02:30:10.417: INFO: Created: latency-svc-g6vbh
Aug 27 02:30:10.454: INFO: Got endpoints: latency-svc-g6vbh [908.965104ms]
Aug 27 02:30:10.571: INFO: Created: latency-svc-mshwq
Aug 27 02:30:10.626: INFO: Got endpoints: latency-svc-mshwq [986.662035ms]
Aug 27 02:30:10.666: INFO: Created: latency-svc-z89pv
Aug 27 02:30:10.714: INFO: Got endpoints: latency-svc-z89pv [1.03640214s]
Aug 27 02:30:10.732: INFO: Created: latency-svc-nqrpb
Aug 27 02:30:10.743: INFO: Got endpoints: latency-svc-nqrpb [1.028827639s]
Aug 27 02:30:10.771: INFO: Created: latency-svc-p48f7
Aug 27 02:30:10.785: INFO: Got endpoints: latency-svc-p48f7 [992.788218ms]
Aug 27 02:30:10.814: INFO: Created: latency-svc-5sfsh
Aug 27 02:30:10.858: INFO: Got endpoints: latency-svc-5sfsh [1.017882177s]
Aug 27 02:30:10.873: INFO: Created: latency-svc-gh5dg
Aug 27 02:30:10.900: INFO: Got endpoints: latency-svc-gh5dg [993.760048ms]
Aug 27 02:30:10.935: INFO: Created: latency-svc-d9bq2
Aug 27 02:30:10.948: INFO: Got endpoints: latency-svc-d9bq2 [993.346651ms]
Aug 27 02:30:11.002: INFO: Created: latency-svc-q6w66
Aug 27 02:30:11.021: INFO: Got endpoints: latency-svc-q6w66 [1.029288734s]
Aug 27 02:30:11.090: INFO: Created: latency-svc-rm928
Aug 27 02:30:11.127: INFO: Got endpoints: latency-svc-rm928 [1.070236038s]
Aug 27 02:30:11.139: INFO: Created: latency-svc-nmdx6
Aug 27 02:30:11.158: INFO: Got endpoints: latency-svc-nmdx6 [1.0588282s]
Aug 27 02:30:11.194: INFO: Created: latency-svc-5wk6s
Aug 27 02:30:11.206: INFO: Got endpoints: latency-svc-5wk6s [1.064282764s]
Aug 27 02:30:11.294: INFO: Created: latency-svc-26g4k
Aug 27 02:30:11.303: INFO: Got endpoints: latency-svc-26g4k [1.088384638s]
Aug 27 02:30:11.373: INFO: Created: latency-svc-6ghnp
Aug 27 02:30:11.387: INFO: Got endpoints: latency-svc-6ghnp [1.090003793s]
Aug 27 02:30:11.453: INFO: Created: latency-svc-hhv2l
Aug 27 02:30:11.489: INFO: Got endpoints: latency-svc-hhv2l [1.099388325s]
Aug 27 02:30:11.544: INFO: Created: latency-svc-m5cs8
Aug 27 02:30:11.613: INFO: Got endpoints: latency-svc-m5cs8 [1.158296831s]
Aug 27 02:30:11.614: INFO: Created: latency-svc-6dpnt
Aug 27 02:30:11.628: INFO: Got endpoints: latency-svc-6dpnt [1.001862889s]
Aug 27 02:30:11.667: INFO: Created: latency-svc-kvwt8
Aug 27 02:30:11.688: INFO: Got endpoints: latency-svc-kvwt8 [973.583781ms]
Aug 27 02:30:11.779: INFO: Created: latency-svc-bghnf
Aug 27 02:30:11.792: INFO: Got endpoints: latency-svc-bghnf [1.048820868s]
Aug 27 02:30:11.820: INFO: Created: latency-svc-4dm9k
Aug 27 02:30:11.838: INFO: Got endpoints: latency-svc-4dm9k [1.052703446s]
Aug 27 02:30:11.862: INFO: Created: latency-svc-vxv7f
Aug 27 02:30:11.954: INFO: Got endpoints: latency-svc-vxv7f [1.095540742s]
Aug 27 02:30:11.964: INFO: Created: latency-svc-lc4wx
Aug 27 02:30:12.007: INFO: Got endpoints: latency-svc-lc4wx [1.106935193s]
Aug 27 02:30:12.030: INFO: Created: latency-svc-469qs
Aug 27 02:30:12.049: INFO: Got endpoints: latency-svc-469qs [1.101173059s]
Aug 27 02:30:12.098: INFO: Created: latency-svc-gvzdj
Aug 27 02:30:12.115: INFO: Got endpoints: latency-svc-gvzdj [1.094079251s]
Aug 27 02:30:12.729: INFO: Created: latency-svc-qpc98
Aug 27 02:30:12.792: INFO: Got endpoints: latency-svc-qpc98 [1.664802447s]
Aug 27 02:30:12.927: INFO: Created: latency-svc-dwphh
Aug 27 02:30:12.954: INFO: Got endpoints: latency-svc-dwphh [1.795499382s]
Aug 27 02:30:13.054: INFO: Created: latency-svc-9gk2h
Aug 27 02:30:13.068: INFO: Got endpoints: latency-svc-9gk2h [1.861963126s]
Aug 27 02:30:13.178: INFO: Created: latency-svc-cvz87
Aug 27 02:30:13.200: INFO: Got endpoints: latency-svc-cvz87 [1.897532348s]
Aug 27 02:30:13.551: INFO: Created: latency-svc-6qjlz
Aug 27 02:30:13.708: INFO: Got endpoints: latency-svc-6qjlz [2.321083545s]
Aug 27 02:30:13.971: INFO: Created: latency-svc-5kbd8
Aug 27 02:30:14.003: INFO: Got endpoints: latency-svc-5kbd8 [2.514211583s]
Aug 27 02:30:14.441: INFO: Created: latency-svc-jbltr
Aug 27 02:30:15.046: INFO: Got endpoints: latency-svc-jbltr [3.433716834s]
Aug 27 02:30:15.341: INFO: Created: latency-svc-rvqwm
Aug 27 02:30:15.396: INFO: Got endpoints: latency-svc-rvqwm [3.768217989s]
Aug 27 02:30:15.396: INFO: Latencies: [96.864063ms 124.27169ms 233.19487ms 357.180163ms 410.369079ms 489.266283ms 517.267419ms 574.709468ms 680.583025ms 731.556644ms 795.170996ms 807.357966ms 815.91363ms 831.501632ms 838.49928ms 842.071703ms 843.045226ms 847.106138ms 854.165757ms 873.919778ms 878.887912ms 880.256447ms 880.51961ms 881.91243ms 902.65121ms 903.491664ms 904.797947ms 908.965104ms 910.52004ms 940.136253ms 947.189551ms 950.346976ms 950.961805ms 951.903714ms 957.498752ms 963.929074ms 966.002378ms 968.897026ms 969.145799ms 971.786194ms 972.424796ms 973.583781ms 976.696126ms 980.692258ms 981.168497ms 983.745131ms 986.662035ms 992.757371ms 992.788218ms 993.346651ms 993.760048ms 993.916001ms 1.001862889s 1.009648507s 1.01097235s 1.015632749s 1.016531421s 1.017882177s 1.020436396s 1.021507215s 1.026540487s 1.028827639s 1.029288734s 1.035271516s 1.03538057s 1.03640214s 1.043826864s 1.048820868s 1.052703446s 1.053277986s 1.0587252s 1.0588282s 1.062180812s 1.064282764s 1.067593899s 1.070236038s 1.088384638s 1.089576441s 1.090003793s 1.094079251s 1.095540742s 1.096902663s 1.099388325s 1.100617355s 1.101173059s 1.106935193s 1.118568211s 1.128915267s 1.156701609s 1.157068645s 1.158296831s 1.158799884s 1.16601696s 1.172633954s 1.175558041s 1.180567843s 1.182268803s 1.183883351s 1.188098797s 1.188350419s 1.199836513s 1.202452287s 1.211367836s 1.220433916s 1.226046035s 1.227625324s 1.23396855s 1.238998673s 1.25045741s 1.260025263s 1.27397449s 1.276023939s 1.288012558s 1.305344588s 1.313810353s 1.316039871s 1.321478139s 1.3297642s 1.33770536s 1.344229046s 1.350367657s 1.371398036s 1.375018405s 1.377147488s 1.394240156s 1.4072427s 1.418625964s 1.422069834s 1.425416209s 1.426775144s 1.465165148s 1.475458023s 1.489270065s 1.519058962s 1.519702723s 1.531272674s 1.535520195s 1.536067082s 1.551219321s 1.570182571s 1.578489665s 1.606199409s 1.61409621s 1.631215429s 1.632334882s 1.63245919s 1.635245056s 1.636665986s 1.664802447s 1.679625389s 1.741742177s 1.795499382s 1.82656957s 1.826673302s 1.83966487s 1.861963126s 1.897532348s 1.910274855s 1.923122158s 1.94269442s 1.971601383s 1.985274812s 2.003769444s 2.003804672s 2.007695223s 2.047482991s 2.080323025s 2.108655309s 2.321083545s 2.45266073s 2.514211583s 2.543297611s 2.557648883s 2.595652326s 2.620729212s 2.644202491s 2.713191993s 2.786575693s 2.870131925s 2.882169232s 2.912932578s 2.920310053s 2.923380009s 2.927842659s 2.933336826s 2.933402841s 2.946983603s 2.991905693s 3.016698637s 3.055193382s 3.241178836s 3.24119126s 3.339106153s 3.351411964s 3.433716834s 3.46094029s 3.478542237s 3.524581808s 3.534204829s 3.768217989s]
Aug 27 02:30:15.396: INFO: 50 %ile: 1.199836513s
Aug 27 02:30:15.396: INFO: 90 %ile: 2.912932578s
Aug 27 02:30:15.396: INFO: 99 %ile: 3.534204829s
Aug 27 02:30:15.396: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:30:15.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7215" for this suite.

• [SLOW TEST:26.850 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":254,"skipped":4278,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:30:15.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:30:53.302: INFO: Container started at 2020-08-27 02:30:28 +0000 UTC, pod became ready at 2020-08-27 02:30:51 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:30:53.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3467" for this suite.

• [SLOW TEST:37.664 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4285,"failed":0}
SSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:30:53.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Aug 27 02:30:56.394: INFO: Waiting up to 5m0s for pod "downward-api-5eb76557-3116-4b34-92be-3cd78d383479" in namespace "downward-api-6295" to be "success or failure"
Aug 27 02:30:56.658: INFO: Pod "downward-api-5eb76557-3116-4b34-92be-3cd78d383479": Phase="Pending", Reason="", readiness=false. Elapsed: 264.168969ms
Aug 27 02:30:58.675: INFO: Pod "downward-api-5eb76557-3116-4b34-92be-3cd78d383479": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28102968s
Aug 27 02:31:00.793: INFO: Pod "downward-api-5eb76557-3116-4b34-92be-3cd78d383479": Phase="Pending", Reason="", readiness=false. Elapsed: 4.399009701s
Aug 27 02:31:03.508: INFO: Pod "downward-api-5eb76557-3116-4b34-92be-3cd78d383479": Phase="Pending", Reason="", readiness=false. Elapsed: 7.113449169s
Aug 27 02:31:05.698: INFO: Pod "downward-api-5eb76557-3116-4b34-92be-3cd78d383479": Phase="Running", Reason="", readiness=true. Elapsed: 9.303562846s
Aug 27 02:31:08.030: INFO: Pod "downward-api-5eb76557-3116-4b34-92be-3cd78d383479": Phase="Running", Reason="", readiness=true. Elapsed: 11.636068152s
Aug 27 02:31:10.129: INFO: Pod "downward-api-5eb76557-3116-4b34-92be-3cd78d383479": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.735159023s
STEP: Saw pod success
Aug 27 02:31:10.130: INFO: Pod "downward-api-5eb76557-3116-4b34-92be-3cd78d383479" satisfied condition "success or failure"
Aug 27 02:31:10.132: INFO: Trying to get logs from node jerma-worker2 pod downward-api-5eb76557-3116-4b34-92be-3cd78d383479 container dapi-container: 
STEP: delete the pod
Aug 27 02:31:11.553: INFO: Waiting for pod downward-api-5eb76557-3116-4b34-92be-3cd78d383479 to disappear
Aug 27 02:31:11.592: INFO: Pod downward-api-5eb76557-3116-4b34-92be-3cd78d383479 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:31:11.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6295" for this suite.

• [SLOW TEST:18.529 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4288,"failed":0}
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:31:11.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 27 02:31:13.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2498'
Aug 27 02:31:13.504: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 27 02:31:13.504: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 27 02:31:13.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-2498'
Aug 27 02:31:15.079: INFO: stderr: ""
Aug 27 02:31:15.079: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:31:15.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2498" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":257,"skipped":4288,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:31:15.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 27 02:31:17.078: INFO: Waiting up to 5m0s for pod "pod-33401fb8-ef40-4fbe-8608-5e04261fc511" in namespace "emptydir-4019" to be "success or failure"
Aug 27 02:31:17.258: INFO: Pod "pod-33401fb8-ef40-4fbe-8608-5e04261fc511": Phase="Pending", Reason="", readiness=false. Elapsed: 179.768726ms
Aug 27 02:31:19.448: INFO: Pod "pod-33401fb8-ef40-4fbe-8608-5e04261fc511": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370173867s
Aug 27 02:31:21.471: INFO: Pod "pod-33401fb8-ef40-4fbe-8608-5e04261fc511": Phase="Pending", Reason="", readiness=false. Elapsed: 4.392532015s
Aug 27 02:31:23.754: INFO: Pod "pod-33401fb8-ef40-4fbe-8608-5e04261fc511": Phase="Pending", Reason="", readiness=false. Elapsed: 6.675686789s
Aug 27 02:31:26.171: INFO: Pod "pod-33401fb8-ef40-4fbe-8608-5e04261fc511": Phase="Pending", Reason="", readiness=false. Elapsed: 9.092593117s
Aug 27 02:31:28.452: INFO: Pod "pod-33401fb8-ef40-4fbe-8608-5e04261fc511": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.373837368s
STEP: Saw pod success
Aug 27 02:31:28.452: INFO: Pod "pod-33401fb8-ef40-4fbe-8608-5e04261fc511" satisfied condition "success or failure"
Aug 27 02:31:28.457: INFO: Trying to get logs from node jerma-worker2 pod pod-33401fb8-ef40-4fbe-8608-5e04261fc511 container test-container: 
STEP: delete the pod
Aug 27 02:31:29.041: INFO: Waiting for pod pod-33401fb8-ef40-4fbe-8608-5e04261fc511 to disappear
Aug 27 02:31:29.182: INFO: Pod pod-33401fb8-ef40-4fbe-8608-5e04261fc511 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:31:29.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4019" for this suite.

• [SLOW TEST:13.771 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4297,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:31:29.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 27 02:31:42.057: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 27 02:31:57.502: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:31:57.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-639" for this suite.

• [SLOW TEST:28.304 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":259,"skipped":4303,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:31:57.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 02:31:58.860: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 02:32:02.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092318, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092318, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092319, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092318, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 02:32:04.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092318, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092318, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092319, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092318, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 02:32:07.441: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:32:08.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2127" for this suite.
STEP: Destroying namespace "webhook-2127-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.376 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":260,"skipped":4369,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:32:11.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:32:14.331: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:32:16.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3118" for this suite.

• [SLOW TEST:6.991 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    getting/updating/patching custom resource definition status sub-resource works  [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":261,"skipped":4370,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:32:18.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:32:29.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7877" for this suite.
STEP: Destroying namespace "nsdeletetest-1195" for this suite.
Aug 27 02:32:29.933: INFO: Namespace nsdeletetest-1195 was already deleted
STEP: Destroying namespace "nsdeletetest-584" for this suite.

• [SLOW TEST:11.742 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":262,"skipped":4376,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:32:29.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 27 02:32:30.131: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-a d2cd49a0-c0cf-4e1b-a95f-463968831028 4102885 0 2020-08-27 02:32:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 27 02:32:30.131: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-a d2cd49a0-c0cf-4e1b-a95f-463968831028 4102885 0 2020-08-27 02:32:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 27 02:32:40.152: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-a d2cd49a0-c0cf-4e1b-a95f-463968831028 4102928 0 2020-08-27 02:32:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 27 02:32:40.152: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-a d2cd49a0-c0cf-4e1b-a95f-463968831028 4102928 0 2020-08-27 02:32:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 27 02:32:50.160: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-a d2cd49a0-c0cf-4e1b-a95f-463968831028 4102958 0 2020-08-27 02:32:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 02:32:50.161: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-a d2cd49a0-c0cf-4e1b-a95f-463968831028 4102958 0 2020-08-27 02:32:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 27 02:33:00.215: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-a d2cd49a0-c0cf-4e1b-a95f-463968831028 4102988 0 2020-08-27 02:32:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 27 02:33:00.215: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-a d2cd49a0-c0cf-4e1b-a95f-463968831028 4102988 0 2020-08-27 02:32:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 27 02:33:10.222: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-b 02ff0c43-c9d8-48a2-b138-5453485c5fb4 4103018 0 2020-08-27 02:33:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 27 02:33:10.222: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-b 02ff0c43-c9d8-48a2-b138-5453485c5fb4 4103018 0 2020-08-27 02:33:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 27 02:33:20.230: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-b 02ff0c43-c9d8-48a2-b138-5453485c5fb4 4103046 0 2020-08-27 02:33:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 27 02:33:20.230: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8448 /api/v1/namespaces/watch-8448/configmaps/e2e-watch-test-configmap-b 02ff0c43-c9d8-48a2-b138-5453485c5fb4 4103046 0 2020-08-27 02:33:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:33:30.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8448" for this suite.

• [SLOW TEST:60.297 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":263,"skipped":4406,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:33:30.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1392.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1392.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1392.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1392.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1392.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1392.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 27 02:33:40.832: INFO: DNS probes using dns-1392/dns-test-6dc58662-501d-44b6-9ea4-69b45136a129 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:33:40.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1392" for this suite.

• [SLOW TEST:10.748 seconds]
[sig-network] DNS
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":264,"skipped":4420,"failed":0}
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:33:40.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Aug 27 02:33:41.802: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:33:53.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8393" for this suite.

• [SLOW TEST:12.544 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":265,"skipped":4420,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:33:53.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Aug 27 02:33:54.490: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 02:33:54.565: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 02:33:54.567: INFO: Logging pods the kubelet thinks are on node jerma-worker before test
Aug 27 02:33:54.689: INFO: kindnet-tfrcx from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 02:33:54.689: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 02:33:54.689: INFO: pod-init-cc90dd8d-4228-4cf1-8dd3-a78a4a3bc502 from init-container-8393 started at 2020-08-27 02:33:42 +0000 UTC (1 container statuses recorded)
Aug 27 02:33:54.689: INFO: 	Container run1 ready: false, restart count 0
Aug 27 02:33:54.689: INFO: daemon-set-4l8wc from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 27 02:33:54.689: INFO: 	Container app ready: true, restart count 0
Aug 27 02:33:54.689: INFO: kube-proxy-lgd85 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 02:33:54.689: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 02:33:54.689: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test
Aug 27 02:33:54.702: INFO: daemon-set-cxv46 from daemonsets-9371 started at 2020-08-20 20:09:22 +0000 UTC (1 container statuses recorded)
Aug 27 02:33:54.702: INFO: 	Container app ready: true, restart count 0
Aug 27 02:33:54.702: INFO: kindnet-gxck9 from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 02:33:54.702: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 27 02:33:54.702: INFO: kube-proxy-ckhpn from kube-system started at 2020-08-15 09:37:48 +0000 UTC (1 container statuses recorded)
Aug 27 02:33:54.702: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 27 02:33:54.702: INFO: test-recreate-deployment-5f94c574ff-k4dkm from deployment-5601 started at 2020-08-23 04:50:56 +0000 UTC (1 container statuses recorded)
Aug 27 02:33:54.702: INFO: 	Container httpd ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e50fd879-3c10-49bb-9ec2-f75db121e33b 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (an empty string here) and expecting it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-e50fd879-3c10-49bb-9ec2-f75db121e33b off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e50fd879-3c10-49bb-9ec2-f75db121e33b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:39:08.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2985" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:314.723 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":266,"skipped":4425,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:39:08.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 02:39:09.830: INFO: Waiting up to 5m0s for pod "downwardapi-volume-799c74ce-f1d9-4f13-8dbe-2967accbdea8" in namespace "downward-api-3742" to be "success or failure"
Aug 27 02:39:10.087: INFO: Pod "downwardapi-volume-799c74ce-f1d9-4f13-8dbe-2967accbdea8": Phase="Pending", Reason="", readiness=false. Elapsed: 257.086953ms
Aug 27 02:39:12.090: INFO: Pod "downwardapi-volume-799c74ce-f1d9-4f13-8dbe-2967accbdea8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259999513s
Aug 27 02:39:14.259: INFO: Pod "downwardapi-volume-799c74ce-f1d9-4f13-8dbe-2967accbdea8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.428703158s
Aug 27 02:39:16.307: INFO: Pod "downwardapi-volume-799c74ce-f1d9-4f13-8dbe-2967accbdea8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476666847s
Aug 27 02:39:18.439: INFO: Pod "downwardapi-volume-799c74ce-f1d9-4f13-8dbe-2967accbdea8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.608884339s
Aug 27 02:39:20.576: INFO: Pod "downwardapi-volume-799c74ce-f1d9-4f13-8dbe-2967accbdea8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.746261072s
STEP: Saw pod success
Aug 27 02:39:20.576: INFO: Pod "downwardapi-volume-799c74ce-f1d9-4f13-8dbe-2967accbdea8" satisfied condition "success or failure"
Aug 27 02:39:20.580: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-799c74ce-f1d9-4f13-8dbe-2967accbdea8 container client-container: 
STEP: delete the pod
Aug 27 02:39:20.931: INFO: Waiting for pod downwardapi-volume-799c74ce-f1d9-4f13-8dbe-2967accbdea8 to disappear
Aug 27 02:39:20.959: INFO: Pod downwardapi-volume-799c74ce-f1d9-4f13-8dbe-2967accbdea8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:39:20.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3742" for this suite.

• [SLOW TEST:12.763 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4427,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:39:21.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:39:32.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7129" for this suite.

• [SLOW TEST:11.748 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":268,"skipped":4454,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:39:32.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 27 02:39:34.404: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 27 02:39:36.460: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 02:39:38.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 27 02:39:40.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734092774, loc:(*time.Location)(0x7931640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 27 02:39:43.696: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:39:55.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3673" for this suite.
STEP: Destroying namespace "webhook-3673-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:23.931 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":269,"skipped":4461,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check if all data is printed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:39:56.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if all data is printed [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Aug 27 02:39:57.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 27 02:39:57.767: INFO: stderr: ""
Aug 27 02:39:57.767: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T15:20:25Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:39:57.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3907" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":270,"skipped":4480,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:39:57.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 27 02:39:58.446: INFO: Waiting up to 5m0s for pod "pod-611775c8-33cd-4be0-8328-6acbd29bf305" in namespace "emptydir-3657" to be "success or failure"
Aug 27 02:39:58.720: INFO: Pod "pod-611775c8-33cd-4be0-8328-6acbd29bf305": Phase="Pending", Reason="", readiness=false. Elapsed: 273.875938ms
Aug 27 02:40:00.724: INFO: Pod "pod-611775c8-33cd-4be0-8328-6acbd29bf305": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277771318s
Aug 27 02:40:02.728: INFO: Pod "pod-611775c8-33cd-4be0-8328-6acbd29bf305": Phase="Pending", Reason="", readiness=false. Elapsed: 4.28164962s
Aug 27 02:40:04.732: INFO: Pod "pod-611775c8-33cd-4be0-8328-6acbd29bf305": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.28600689s
STEP: Saw pod success
Aug 27 02:40:04.732: INFO: Pod "pod-611775c8-33cd-4be0-8328-6acbd29bf305" satisfied condition "success or failure"
Aug 27 02:40:04.736: INFO: Trying to get logs from node jerma-worker pod pod-611775c8-33cd-4be0-8328-6acbd29bf305 container test-container: 
STEP: delete the pod
Aug 27 02:40:04.854: INFO: Waiting for pod pod-611775c8-33cd-4be0-8328-6acbd29bf305 to disappear
Aug 27 02:40:04.873: INFO: Pod pod-611775c8-33cd-4be0-8328-6acbd29bf305 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:40:04.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3657" for this suite.

• [SLOW TEST:7.104 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4507,"failed":0}
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:40:04.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Aug 27 02:40:05.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 27 02:40:05.512: INFO: stderr: ""
Aug 27 02:40:05.512: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:40:05.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3908" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":272,"skipped":4507,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:40:05.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4943
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 27 02:40:05.750: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 27 02:40:37.406: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.243:8080/dial?request=hostname&protocol=udp&host=10.244.2.83&port=8081&tries=1'] Namespace:pod-network-test-4943 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 02:40:37.406: INFO: >>> kubeConfig: /root/.kube/config
I0827 02:40:37.435936       6 log.go:172] (0xc002ab3ef0) (0xc002e23c20) Create stream
I0827 02:40:37.435963       6 log.go:172] (0xc002ab3ef0) (0xc002e23c20) Stream added, broadcasting: 1
I0827 02:40:37.438182       6 log.go:172] (0xc002ab3ef0) Reply frame received for 1
I0827 02:40:37.438214       6 log.go:172] (0xc002ab3ef0) (0xc0009694a0) Create stream
I0827 02:40:37.438225       6 log.go:172] (0xc002ab3ef0) (0xc0009694a0) Stream added, broadcasting: 3
I0827 02:40:37.439179       6 log.go:172] (0xc002ab3ef0) Reply frame received for 3
I0827 02:40:37.439207       6 log.go:172] (0xc002ab3ef0) (0xc002e23cc0) Create stream
I0827 02:40:37.439221       6 log.go:172] (0xc002ab3ef0) (0xc002e23cc0) Stream added, broadcasting: 5
I0827 02:40:37.440177       6 log.go:172] (0xc002ab3ef0) Reply frame received for 5
I0827 02:40:37.503679       6 log.go:172] (0xc002ab3ef0) Data frame received for 3
I0827 02:40:37.503704       6 log.go:172] (0xc0009694a0) (3) Data frame handling
I0827 02:40:37.503721       6 log.go:172] (0xc0009694a0) (3) Data frame sent
I0827 02:40:37.504356       6 log.go:172] (0xc002ab3ef0) Data frame received for 5
I0827 02:40:37.504396       6 log.go:172] (0xc002e23cc0) (5) Data frame handling
I0827 02:40:37.504423       6 log.go:172] (0xc002ab3ef0) Data frame received for 3
I0827 02:40:37.504437       6 log.go:172] (0xc0009694a0) (3) Data frame handling
I0827 02:40:37.506252       6 log.go:172] (0xc002ab3ef0) Data frame received for 1
I0827 02:40:37.506272       6 log.go:172] (0xc002e23c20) (1) Data frame handling
I0827 02:40:37.506283       6 log.go:172] (0xc002e23c20) (1) Data frame sent
I0827 02:40:37.506299       6 log.go:172] (0xc002ab3ef0) (0xc002e23c20) Stream removed, broadcasting: 1
I0827 02:40:37.506316       6 log.go:172] (0xc002ab3ef0) Go away received
I0827 02:40:37.506505       6 log.go:172] (0xc002ab3ef0) (0xc002e23c20) Stream removed, broadcasting: 1
I0827 02:40:37.506533       6 log.go:172] (0xc002ab3ef0) (0xc0009694a0) Stream removed, broadcasting: 3
I0827 02:40:37.506544       6 log.go:172] (0xc002ab3ef0) (0xc002e23cc0) Stream removed, broadcasting: 5
Aug 27 02:40:37.506: INFO: Waiting for responses: map[]
Aug 27 02:40:37.510: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.243:8080/dial?request=hostname&protocol=udp&host=10.244.1.242&port=8081&tries=1'] Namespace:pod-network-test-4943 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 27 02:40:37.510: INFO: >>> kubeConfig: /root/.kube/config
I0827 02:40:37.541524       6 log.go:172] (0xc002a4ab00) (0xc002008320) Create stream
I0827 02:40:37.541562       6 log.go:172] (0xc002a4ab00) (0xc002008320) Stream added, broadcasting: 1
I0827 02:40:37.543456       6 log.go:172] (0xc002a4ab00) Reply frame received for 1
I0827 02:40:37.543499       6 log.go:172] (0xc002a4ab00) (0xc000969860) Create stream
I0827 02:40:37.543519       6 log.go:172] (0xc002a4ab00) (0xc000969860) Stream added, broadcasting: 3
I0827 02:40:37.544435       6 log.go:172] (0xc002a4ab00) Reply frame received for 3
I0827 02:40:37.544462       6 log.go:172] (0xc002a4ab00) (0xc000969d60) Create stream
I0827 02:40:37.544470       6 log.go:172] (0xc002a4ab00) (0xc000969d60) Stream added, broadcasting: 5
I0827 02:40:37.545403       6 log.go:172] (0xc002a4ab00) Reply frame received for 5
I0827 02:40:37.607251       6 log.go:172] (0xc002a4ab00) Data frame received for 3
I0827 02:40:37.607290       6 log.go:172] (0xc000969860) (3) Data frame handling
I0827 02:40:37.607316       6 log.go:172] (0xc000969860) (3) Data frame sent
I0827 02:40:37.607545       6 log.go:172] (0xc002a4ab00) Data frame received for 5
I0827 02:40:37.607569       6 log.go:172] (0xc000969d60) (5) Data frame handling
I0827 02:40:37.607909       6 log.go:172] (0xc002a4ab00) Data frame received for 3
I0827 02:40:37.607928       6 log.go:172] (0xc000969860) (3) Data frame handling
I0827 02:40:37.609364       6 log.go:172] (0xc002a4ab00) Data frame received for 1
I0827 02:40:37.609388       6 log.go:172] (0xc002008320) (1) Data frame handling
I0827 02:40:37.609408       6 log.go:172] (0xc002008320) (1) Data frame sent
I0827 02:40:37.609425       6 log.go:172] (0xc002a4ab00) (0xc002008320) Stream removed, broadcasting: 1
I0827 02:40:37.609444       6 log.go:172] (0xc002a4ab00) Go away received
I0827 02:40:37.609545       6 log.go:172] (0xc002a4ab00) (0xc002008320) Stream removed, broadcasting: 1
I0827 02:40:37.609566       6 log.go:172] (0xc002a4ab00) (0xc000969860) Stream removed, broadcasting: 3
I0827 02:40:37.609595       6 log.go:172] (0xc002a4ab00) (0xc000969d60) Stream removed, broadcasting: 5
Aug 27 02:40:37.609: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:40:37.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4943" for this suite.

• [SLOW TEST:32.098 seconds]
[sig-network] Networking
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4520,"failed":0}
SSSSSSSS
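The UDP check above runs an agnhost "netexec" server pod per node and probes it through the /dial endpoint visible in the two exec'd curl commands. A minimal sketch of such a server pod follows; the pod name, label, and image tag are assumptions (the framework derives its own):

apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                 # hypothetical name
  labels:
    selector: netserver             # hypothetical label targeted by the test service
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # image tag is an assumption
    args: ["netexec", "--http-port=8080", "--udp-port=8081"]
    ports:
    - containerPort: 8080           # HTTP control endpoint (serves /dial, /hostname)
    - containerPort: 8081
      protocol: UDP                 # UDP echo endpoint under test

The host-test pod then asks one server to dial its peer, exactly as logged above: /dial?request=hostname&protocol=udp&host=<peer-ip>&port=8081&tries=1, and the test passes when each peer's hostname comes back in the response map.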
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:40:37.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 27 02:40:38.871: INFO: Waiting up to 5m0s for pod "pod-ac02631a-10eb-4309-9781-aba1f2b7cd6a" in namespace "emptydir-4619" to be "success or failure"
Aug 27 02:40:39.098: INFO: Pod "pod-ac02631a-10eb-4309-9781-aba1f2b7cd6a": Phase="Pending", Reason="", readiness=false. Elapsed: 227.411692ms
Aug 27 02:40:41.102: INFO: Pod "pod-ac02631a-10eb-4309-9781-aba1f2b7cd6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23104845s
Aug 27 02:40:43.278: INFO: Pod "pod-ac02631a-10eb-4309-9781-aba1f2b7cd6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.407101305s
Aug 27 02:40:45.643: INFO: Pod "pod-ac02631a-10eb-4309-9781-aba1f2b7cd6a": Phase="Running", Reason="", readiness=true. Elapsed: 6.772117945s
Aug 27 02:40:47.646: INFO: Pod "pod-ac02631a-10eb-4309-9781-aba1f2b7cd6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.775450785s
STEP: Saw pod success
Aug 27 02:40:47.646: INFO: Pod "pod-ac02631a-10eb-4309-9781-aba1f2b7cd6a" satisfied condition "success or failure"
Aug 27 02:40:47.648: INFO: Trying to get logs from node jerma-worker pod pod-ac02631a-10eb-4309-9781-aba1f2b7cd6a container test-container: 
STEP: delete the pod
Aug 27 02:40:47.706: INFO: Waiting for pod pod-ac02631a-10eb-4309-9781-aba1f2b7cd6a to disappear
Aug 27 02:40:47.924: INFO: Pod pod-ac02631a-10eb-4309-9781-aba1f2b7cd6a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:40:47.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4619" for this suite.

• [SLOW TEST:10.313 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4528,"failed":0}
SSSSSSSSSS
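The (root,0644,tmpfs) case creates a pod that writes a file with mode 0644 into a memory-backed emptyDir and verifies owner, permissions, and filesystem type from the container's logs. A minimal equivalent, assuming the pod name, image, and shell commands (the suite uses its own mounttest-style image):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-0644          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.32              # assumption; stands in for the e2e test image
    command: ["sh", "-c",
      "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # Memory medium backs the volume with tmpfs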
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:40:47.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Aug 27 02:40:48.966: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e6f22f5-feec-4de7-9f86-1a19a8360960" in namespace "projected-4907" to be "success or failure"
Aug 27 02:40:49.355: INFO: Pod "downwardapi-volume-9e6f22f5-feec-4de7-9f86-1a19a8360960": Phase="Pending", Reason="", readiness=false. Elapsed: 388.833439ms
Aug 27 02:40:51.359: INFO: Pod "downwardapi-volume-9e6f22f5-feec-4de7-9f86-1a19a8360960": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392665554s
Aug 27 02:40:53.389: INFO: Pod "downwardapi-volume-9e6f22f5-feec-4de7-9f86-1a19a8360960": Phase="Running", Reason="", readiness=true. Elapsed: 4.422709078s
Aug 27 02:40:55.392: INFO: Pod "downwardapi-volume-9e6f22f5-feec-4de7-9f86-1a19a8360960": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.425799751s
STEP: Saw pod success
Aug 27 02:40:55.392: INFO: Pod "downwardapi-volume-9e6f22f5-feec-4de7-9f86-1a19a8360960" satisfied condition "success or failure"
Aug 27 02:40:55.394: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9e6f22f5-feec-4de7-9f86-1a19a8360960 container client-container: 
STEP: delete the pod
Aug 27 02:40:55.602: INFO: Waiting for pod downwardapi-volume-9e6f22f5-feec-4de7-9f86-1a19a8360960 to disappear
Aug 27 02:40:55.640: INFO: Pod downwardapi-volume-9e6f22f5-feec-4de7-9f86-1a19a8360960 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:40:55.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4907" for this suite.

• [SLOW TEST:7.716 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4538,"failed":0}
S
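Here the container sets no CPU limit, so limits.cpu exposed through the downward API must resolve to the node's allocatable CPU. A sketch of the projected downwardAPI volume involved; pod, volume, and path names are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-default      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.32              # assumption
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu is set, so the projected value falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu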
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:40:55.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-ztk9
STEP: Creating a pod to test atomic-volume-subpath
Aug 27 02:40:55.843: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ztk9" in namespace "subpath-2518" to be "success or failure"
Aug 27 02:40:55.864: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.241787ms
Aug 27 02:40:57.948: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104456838s
Aug 27 02:40:59.951: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Running", Reason="", readiness=true. Elapsed: 4.107661224s
Aug 27 02:41:01.955: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Running", Reason="", readiness=true. Elapsed: 6.111369553s
Aug 27 02:41:03.959: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Running", Reason="", readiness=true. Elapsed: 8.115732521s
Aug 27 02:41:05.963: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Running", Reason="", readiness=true. Elapsed: 10.119730509s
Aug 27 02:41:07.975: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Running", Reason="", readiness=true. Elapsed: 12.131785227s
Aug 27 02:41:09.979: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Running", Reason="", readiness=true. Elapsed: 14.135911052s
Aug 27 02:41:11.983: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Running", Reason="", readiness=true. Elapsed: 16.139745953s
Aug 27 02:41:13.988: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Running", Reason="", readiness=true. Elapsed: 18.144221323s
Aug 27 02:41:15.991: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Running", Reason="", readiness=true. Elapsed: 20.147700886s
Aug 27 02:41:17.995: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Running", Reason="", readiness=true. Elapsed: 22.151463303s
Aug 27 02:41:20.044: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Running", Reason="", readiness=true. Elapsed: 24.200673966s
Aug 27 02:41:22.048: INFO: Pod "pod-subpath-test-secret-ztk9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.204322679s
STEP: Saw pod success
Aug 27 02:41:22.048: INFO: Pod "pod-subpath-test-secret-ztk9" satisfied condition "success or failure"
Aug 27 02:41:22.051: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-ztk9 container test-container-subpath-secret-ztk9: 
STEP: delete the pod
Aug 27 02:41:22.100: INFO: Waiting for pod pod-subpath-test-secret-ztk9 to disappear
Aug 27 02:41:22.113: INFO: Pod pod-subpath-test-secret-ztk9 no longer exists
STEP: Deleting pod pod-subpath-test-secret-ztk9
Aug 27 02:41:22.113: INFO: Deleting pod "pod-subpath-test-secret-ztk9" in namespace "subpath-2518"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:41:22.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2518" for this suite.

• [SLOW TEST:26.474 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":276,"skipped":4539,"failed":0}
SSSSSSSSSSS
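The atomic-writer subpath case mounts a single secret key through subPath and reads it back repeatedly while the pod runs, which is why the pod stays Running for roughly twenty seconds above before succeeding. A rough equivalent, with the secret name, key, image, and loop assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.32              # assumption
    command: ["sh", "-c", "for i in $(seq 1 10); do cat /probe-volume/data; sleep 2; done"]
    volumeMounts:
    - name: secret-volume
      mountPath: /probe-volume/data
      subPath: data                  # mounts only the key 'data' from the volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret          # hypothetical; must contain a key named 'data'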
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:41:22.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-523
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Aug 27 02:41:22.221: INFO: Found 0 stateful pods, waiting for 3
Aug 27 02:41:33.201: INFO: Found 2 stateful pods, waiting for 3
Aug 27 02:41:42.226: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:41:42.226: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:41:42.226: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 27 02:41:52.225: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:41:52.225: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:41:52.225: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 27 02:41:52.249: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 27 02:42:02.342: INFO: Updating stateful set ss2
Aug 27 02:42:02.373: INFO: Waiting for Pod statefulset-523/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 27 02:42:14.793: INFO: Found 2 stateful pods, waiting for 3
Aug 27 02:42:24.866: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:42:24.866: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:42:24.866: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 27 02:42:34.799: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:42:34.799: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 27 02:42:34.799: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 27 02:42:34.822: INFO: Updating stateful set ss2
Aug 27 02:42:34.962: INFO: Waiting for Pod statefulset-523/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 27 02:42:44.986: INFO: Updating stateful set ss2
Aug 27 02:42:45.001: INFO: Waiting for StatefulSet statefulset-523/ss2 to complete update
Aug 27 02:42:45.001: INFO: Waiting for Pod statefulset-523/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 27 02:42:55.078: INFO: Waiting for StatefulSet statefulset-523/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Aug 27 02:43:05.009: INFO: Deleting all statefulset in ns statefulset-523
Aug 27 02:43:05.011: INFO: Scaling statefulset ss2 to 0
Aug 27 02:43:35.043: INFO: Waiting for statefulset status.replicas updated to 0
Aug 27 02:43:35.046: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:43:35.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-523" for this suite.

• [SLOW TEST:132.990 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":277,"skipped":4550,"failed":0}
SSS
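The canary step hinges on the RollingUpdate partition: with partition: 2, only ordinals >= 2 (here ss2-2) receive the new template revision, and lowering the partition afterwards phases the remaining pods in, matching the per-pod waits logged above. A sketch of the relevant spec, using the two httpd images from the log; the label is an assumption:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test                  # headless service created in BeforeEach
  selector:
    matchLabels:
      app: ss2                       # hypothetical label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                   # canary: only pods with ordinal >= 2 update
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.39-alpine   # updated from 2.4.38-alpine per the log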
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Aug 27 02:43:35.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-802b251e-85e0-4196-bdaa-f972222fe63e
STEP: Creating a pod to test consume secrets
Aug 27 02:43:35.556: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-21a27cc9-109e-41cb-af3b-9c9ec3f9653f" in namespace "projected-7965" to be "success or failure"
Aug 27 02:43:35.620: INFO: Pod "pod-projected-secrets-21a27cc9-109e-41cb-af3b-9c9ec3f9653f": Phase="Pending", Reason="", readiness=false. Elapsed: 64.621339ms
Aug 27 02:43:37.932: INFO: Pod "pod-projected-secrets-21a27cc9-109e-41cb-af3b-9c9ec3f9653f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375976111s
Aug 27 02:43:39.944: INFO: Pod "pod-projected-secrets-21a27cc9-109e-41cb-af3b-9c9ec3f9653f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.388201263s
STEP: Saw pod success
Aug 27 02:43:39.944: INFO: Pod "pod-projected-secrets-21a27cc9-109e-41cb-af3b-9c9ec3f9653f" satisfied condition "success or failure"
Aug 27 02:43:39.946: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-21a27cc9-109e-41cb-af3b-9c9ec3f9653f container projected-secret-volume-test: 
STEP: delete the pod
Aug 27 02:43:40.201: INFO: Waiting for pod pod-projected-secrets-21a27cc9-109e-41cb-af3b-9c9ec3f9653f to disappear
Aug 27 02:43:40.212: INFO: Pod pod-projected-secrets-21a27cc9-109e-41cb-af3b-9c9ec3f9653f no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Aug 27 02:43:40.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7965" for this suite.

• [SLOW TEST:5.235 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.11-rc.1.3+564c2018c1ea15/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4553,"failed":0}
SSSSSSSSSSSSS
Aug 27 02:43:40.348: INFO: Running AfterSuite actions on all nodes
Aug 27 02:43:40.348: INFO: Running AfterSuite actions on node 1
Aug 27 02:43:40.348: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4566,"failed":0}

Ran 278 of 4844 Specs in 5922.758 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4566 Skipped
PASS